What's new in Informix

IBM® Informix® 12.10 has new, changed, and removed features.

What's new in IBM Informix version 12.10.xC14

What's new in IBM Informix version 12.10.xC13

What's new in IBM Informix version 12.10.xC11

What's new in IBM Informix version 12.10.xC10

What's new in IBM Informix version 12.10.xC9

What's new in IBM Informix version 12.10.xC8

What's new in IBM Informix version 12.10.xC7

What's new in IBM Informix version 12.10.xC6

What's new in IBM Informix version 12.10.xC5

What's new in IBM Informix version 12.10.xC4

Migration
Installation
Administration
Replication
Performance
Application development
JSON compatibility
Time series
Security

What's new in IBM Informix version 12.10.xC3

Migration
Installation
Server configuration
Administration
Autonomics
Performance
Connectivity
Application development
JSON compatibility
Foreign-key constraints
Compatibility
High-availability clusters and Enterprise Replication
Connection Manager
Monitoring
Configuration
Sharding
Spatial data
Time series data
Storage
Containers
Faster queries
Warehousing
Additional types of data

What's new in IBM Informix version 12.10.xC2

Installation
Migration
Administration
Error handling
IBM OpenAdmin Tool (OAT) for Informix
Back up and restore
Configuration
Performance
Faster operations
Dynamic tuning
Troubleshooting
Application development
JSON compatibility
New syntax for queries
Basic text searching
Client products
New default
Enterprise Replication
Easier setup
Configuration
Sharding
Time series data
Enhancements
Faster queries
Warehousing
Additional types of data
Security

What's new in IBM Informix version 12.10.xC1

Installation and migration
Client SDK
Migration
Application development
Manage data storage
Enhancements for warehousing and business intelligence queries
Faster queries
Result tables
SPL enhancements
Simpler and more compatible SQL syntax
Administration
Compression options
Back up and restore
Ease of use and embeddability
OAT usability enhancements
Performance
Extensibility improvements
High availability and Enterprise Replication
Grid enhancements
Enhancements to defining replicates for Enterprise Replication
Replicate more types of data with Enterprise Replication
Faster and more reliable high-availability clusters
Faster replication
Other improvements to Enterprise Replication
Informix Warehouse Accelerator
Refresh data in data warehouses
More accelerated queries
Administer the accelerator
Improved security
Time series
Manage time series data
Speed up data loading
Customize data loading
Removed features and products
ON-Monitor
IBM Informix Storage Manager (ISM)
Optical subsystem
Geodetic DataBlade module

What's new in IBM Informix version 12.10.xC14

InformixHQ

  • Customization
  • Ease of use
  • Administration

This feature is documented in the InformixHQ Guide.

GSKit Upgrade

  • Upgraded to version 8.0.55.12.

Java™ SE Runtime Environment Upgrade

  • Upgraded to version 7.1.4.55 for Linux Power Series (little endian).
  • Upgraded to version 7.0.10.45 for HP.
  • Upgraded to version 7.0.10.55 for other platforms.

What's new in IBM Informix version 12.10.xC13

InformixHQ

InformixHQ is a new web-based tool that replaces the IBM OpenAdmin Tool (OAT) for Informix and provides many requested features:

  • A modernized web console for monitoring, alerting, and administering Informix database servers.
  • Critical performance management capabilities that monitor how key performance metrics change over time and track how efficiently Informix runs your workload, even when you have stepped away from your screen.
  • A customizable alerting system so that you can be notified immediately through modern IT infrastructures such as PagerDuty, Twilio, and email.
  • Purpose-built for scaling out and for optimizing DevOps needs.
  • Accessible from any desktop, laptop, or mobile device.

This feature is documented in the InformixHQ Guide.

LOGBUF_INTVL configuration parameter

The new LOGBUF_INTVL configuration parameter ensures that the logical log buffer is flushed periodically when only buffered logging is used.

This feature is documented in the Administrator's Reference Guide.
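A minimal onconfig fragment might look like the following; the value shown and its unit are assumptions, so confirm them in the Administrator's Reference:

```
# onconfig fragment (illustrative): flush the logical log buffer
# periodically even when all databases use buffered logging.
# The value and its unit (assumed to be seconds) are assumptions.
LOGBUF_INTVL 30
```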

onstat -g command enhancement

New filters are added to the onstat -g ses, onstat -g sql, and onstat -g pqs commands to display information about threads that are in particular states of interest.

This feature is documented in the Administrator's Reference Guide.

What's new in IBM Informix version 12.10.xC11

Drop Empty Chunks
  • The server does not transition to on-line mode if it has off-line chunks that do not have an on-line mirror. If a chunk is empty, you can safely unblock the server, or you can drop the chunk and bring the server on-line. The "onstat -d" output adds a footnote for each down chunk that is empty, and a special footnote for the first chunk of a down space that is empty.

This feature is documented in the Informix Administrator's Reference Guide.

Loopback replication
  • With loopback replication support, you can set up replication between tables within the same Informix server. The source and target tables can be in the same database or in two different databases within the same server.

This feature is documented in the Enterprise Replication Guide.

UTF8 - UTF8 GLU Migration

  • The GL_USEGLU environment variable can be set to 1 or 4. After you set the variable, restart the Informix instance. The setting affects databases with a locale encoding of either gb18030-2000 or utf8.

This feature is documented in the GLS User's Guide.

Enhanced TEMP DBspace Resiliency

Whenever you start the database server, all chunks in temporary dbspaces are recreated from scratch. These chunks can therefore be located on RAM drives if desired.

This feature is documented in the Informix Administrator's Guide.

Change in Transport Layer Security default value

  • TLS versions 1.0 and 1.1 are no longer enabled by default, but they can be enabled explicitly in the onconfig file. TLS version 1.2 is enabled by default.

This feature is documented in the Informix Administrator's Reference Guide.
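To restore the old behavior for older clients, the protocol versions must be listed explicitly in the onconfig file. A sketch, assuming the TLS_VERSION configuration parameter and a comma-separated value format:

```
# onconfig fragment: TLS 1.2 is now the default.
# To allow older clients, list the versions explicitly
# (the value format is an assumption; check the Administrator's Reference).
TLS_VERSION 1.0,1.1,1.2
```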

What's new in IBM Informix version 12.10.xC10

Backup to Cloud
  • The Primary Storage Manager of Informix Dynamic Server 12.10.xC10 can now store and retrieve backups directly on selected cloud providers, namely Amazon S3 and IBM SoftLayer Object Storage.

This feature is documented in the Informix Administrator's Guide.

Switch Primary and Mirror Chunks
  • With the release of Informix 12.10.xC10, you can toggle between primary and mirror chunks, making the original primary chunk the new mirror and the original mirror chunk the new primary. This lets you quickly migrate an Informix chunk from your current set of disks to a newer, faster set of disks with no downtime.

This feature is documented in the Informix Administrator's Guide.

Onstat Enhancements
  • onstat -g ses shows complete host and user names
Performance enhancement for blocking oninit
  • Increases the speed of onmode -c block and onmode -c unblock
Identify Unused Indexes
  • Provides the user with the time that an index was last used

This feature is documented in the Informix Administrator's Reference Guide.

Easier Session Tracking
  • CSDK: The new CLIENT_LABEL environment variable lets clients tag themselves for monitoring purposes.
  • JDBC: Some RDBMSs, such as Oracle (where dbms_session.set_client_identifier sets the Oracle client ID), support flagging a session with custom identifiers. This enhancement improves session tracking, which is also useful in OAT.

This feature is documented in the SQL Programming Guide.

Truncate Table
  • Allows you to truncate a table but keep its basic statistics (nrows); an extension to the TRUNCATE TABLE command

This feature is documented in the SQL Programming Guide.

Smart Trigger session survival
With the release of JDBC 4.10.JC10, new smart trigger features are enabled:
  • Enhances the xC9 feature so that client sessions can receive events after they reconnect to the server
  • Adds audit support: the ability to get the user ID and session ID that caused an event

This feature is documented in the Enterprise Replication Guide.

Compression enhancement for DBSpace and BLOBspace Blobs
With the release of Informix 12.10.xC10, compression of binary large objects (BLOBs) allows you to mark a table as compressed from its inception, even if the table contains no rows from which a compression dictionary could be built.

This feature is documented in the Informix Administrator's Guide.

TimeSeries compression on timeseries sub-type containing string fields

  • Ability to compress TimeSeries data in a row that contains strings, improving compression ratio for those rows

This feature is documented in the TimeSeries Data User's Guide.

Removed deprecated features

  • DataBlade Developers Kit (DBDK)
  • “onsnmp” subagent
  • Blade Manager

Separated server/client installs

GSKit Upgrade

  • Upgraded to version 8.0.50.88.

What's new in IBM Informix version 12.10.xC9

Smart Triggers for pushing data to clients

In a relational database environment, client applications are constantly monitoring and triggering other complex jobs, based on changes happening in the database. Applications need to be able to detect events in the database as they happen, without adding overhead to the database server.

With the release of Informix 12.10.xC9, clients can now create JDBC Smart Triggers to register changes in a dataset, using SELECT statements and WHERE clauses, and push data from the server to the client. Scaling is achieved by clients not having to poll for data, while the database server's parallel architecture can feed the data to all clients by asynchronously reading logical log file changes. This design lets client applications scale linearly without adding significant overhead to the database server or any OLTP applications making changes to the database.

Data returned to the client is in a developer-friendly JSON format.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Track moving objects

You can track a moving object, such as a vehicle, by capturing location information for the object at regular time intervals. You can use the new spatiotemporal search extension to index the data and then query on either time or on location to determine the relationship of one to the other. You can query when an object was at a specified location, or where an object was at a specified time. You can also find the trajectory of a moving object over a range of time.

The spatiotemporal search extension depends on the TimeSeries and spatial extensions. You store the spatiotemporal data in a TimeSeries data type with columns for longitude and latitude. You index and query the spatiotemporal data with the new spatiotemporal search functions. You can also query spatiotemporal data with time series and spatial routines.

A greater frequency for tracking moving or stationary objects is available. Time can now be entered as a string or a number.

This feature is documented in the IBM Informix Spatiotemporal Search for Moving Objects User's Guide.

What's new in IBM Informix version 12.10.xC8

Server changes that affect migration

Due to new features and functionality, Informix version 12.10.xC8 contains the following new configuration parameter and environment variables:

  • DISK_ENCRYPTION configuration parameter
  • USE_SHARDING session environment variable
  • IFX_LO_READONLY environment variable

These changes are summarized in the Migration Guide.

View and rerun DB-Access commands

You can now view DB-Access commands that you ran from the dbaccess command line in interactive non-menu mode during the current session by running the dbaccess command with the new -history option. Previous commands are listed and numbered. You can rerun a previous command with the run command followed by the number of the previous command.

This feature is documented in the IBM Informix DB-Access User's Guide.
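A hypothetical interactive session might look like this; the exact listing format is illustrative, and the database name is an example:

```
$ dbaccess -history stores_demo -
> SELECT COUNT(*) FROM customer;
> SELECT fname, lname FROM customer WHERE customer_num = 101;
> history
1 SELECT COUNT(*) FROM customer;
2 SELECT fname, lname FROM customer WHERE customer_num = 101;
> run 2
```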

JSON compatibility enhancements

JSON compatibility is enhanced in the following ways:

  • Run MongoDB or REST API queries with native cursors

    By default, the wire listener separates MongoDB and REST API queries into multiple queries that each return up to the number of documents set by the response.documents.count.default property. You can now also enable a native cursor for a MongoDB or REST API query with the new $nativeCursor or &nativeCursor query modifier. A native cursor holds open a true cursor on the Informix database server. A native cursor requires more listener resources because connections and result-set objects are tied to a single session, but the cursor guarantees consistent query results. For REST API queries, use the new killCursors command to close the cursor.

  • Cursor idle timeout

    You can control the cursor idle timeout with the new cursor.idle.timeout wire listener property.

  • Client connections to the wire listener are now timed out by default

    The new default value for the listener.idle.timeout parameter is 300000. This parameter sets the amount of time, in milliseconds, that a client connection to the wire listener can idle before it is forcibly closed. The value 0, which was the previous default value, indicates that client connections are never timed out.

  • Wire listener HTTP request headers have a size limit by default

    The new listener.http.headers.size.maximum parameter specifies the maximum size of incoming HTTP request headers. The default value is 8192 bytes.

These features are documented in the IBM Informix JSON Compatibility Guide.
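Together, these properties might appear in the wire listener configuration file as follows. The listener.idle.timeout and listener.http.headers.size.maximum values are the documented defaults; the cursor.idle.timeout value and its unit are assumptions:

```
# Wire listener configuration fragment (illustrative)
listener.idle.timeout=300000              # milliseconds; 0 = never time out
listener.http.headers.size.maximum=8192   # bytes
cursor.idle.timeout=60000                 # value and unit are assumptions
```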

Consistent sharded insert, update, and delete operations

When you run sharded operations that insert, update, or delete data, the transactions are now applied with the two-phase commit protocol instead of being eventually consistent. Data is moved to the appropriate shard server before the transaction is committed.

For sharding with Enterprise Replication commands, you must set the new USE_SHARDING session environment option to enable consistent sharded insert, update, and delete operations.

For sharding with MongoDB commands, you must set the sharding.enable wire listener configuration parameter to enable consistent sharded insert, update, and delete operations.

This feature is documented in the IBM Informix JSON Compatibility Guide, the IBM Informix Enterprise Replication Guide, and the IBM Informix Guide to SQL: Syntax.
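For Enterprise Replication sharding, a session enables the option before running sharded DML. A sketch, assuming the SET ENVIRONMENT form described in the Guide to SQL: Syntax:

```sql
-- Enable consistent (two-phase commit) sharded inserts, updates,
-- and deletes for this session.
SET ENVIRONMENT USE_SHARDING ON;
```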

List Enterprise Replication definition commands

You can print a list of the commands that you ran to define replication servers, replicates, replicate sets, templates, or grids with the new cdr list catalog command. You can use the list of commands to easily duplicate a system for troubleshooting or to move a test system into production.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Complex text search with regular expressions

You can now search for and replace text strings with regular expressions in SQL statements. Regular expressions combine literal characters and metacharacters to define the search and replace criteria. You run the functions from the new Informix Regex extension to find matches to strings, replace strings, and split strings into substrings. The regex functions support the POSIX 1003.2 regular expression and replacement metacharacters.

The JSON wire listener now supports searching with regular expressions with the $regex operator in MongoDB commands.

This feature is documented in the IBM Informix Database Extensions User's Guide and the IBM Informix JSON Compatibility Guide.
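As a sketch, assuming the Informix Regex extension is registered in the database and using function names from the Database Extensions User's Guide; the table and column names are hypothetical:

```sql
-- Test whether a string contains a match for a POSIX regular expression.
EXECUTE FUNCTION regex_match('order-2016-042', '[0-9]{4}');

-- Replace every run of digits in a column with '#'.
SELECT regex_replace(description, '[0-9]+', '#')
  FROM catalog_items;
```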

Rename system-generated indexes

You can rename a system-generated index that implements a primary-key or foreign-key constraint. When you rename a system-generated index, the index is not automatically dropped if you drop the corresponding constraint. You can also rename a system-generated index to make the name more meaningful.

This feature is documented in the IBM Informix Guide to SQL: Syntax.
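A sketch; the index and table names are hypothetical, and system-generated index names typically begin with a space character:

```sql
-- Give the system-generated primary-key index a meaningful name.
-- (A double-quoted identifier requires the DELIMIDENT environment
-- variable to be set.)
RENAME INDEX informix." 121_45" TO pk_orders_idx;
```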

Temporary table space usage

You can now see the space usage for session-level temporary tables by running an SQL query on the sysptnhdr table in the sysmaster database.

This feature is documented in the IBM Informix Administrator’s Reference.

Suspend validation of check constraints

You can now temporarily disable check constraints with the NOVALIDATE keyword. When you create or enable a check constraint, you can speed up the statement by including the NOVALIDATE keyword to skip the checking of existing rows for violations. The check constraint is enabled when the statement completes.

This feature is documented in the IBM Informix Guide to SQL: Syntax.
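A sketch; the table and column names are hypothetical:

```sql
-- Add a check constraint without scanning existing rows for violations;
-- the constraint is enforced for new rows once the statement completes.
ALTER TABLE orders ADD CONSTRAINT CHECK (amount > 0) NOVALIDATE;
```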

JDBC 4.0 compliance

The Informix JDBC driver is now compliant with the Java™ Database Connectivity (JDBC) 4.0 specification.

This feature is documented in the IBM Informix JDBC Driver Programmer's Guide.

Query smart large objects with read-only access

You can now specify that smart large objects are queried with read-only access in JDBC connections. By default, smart large objects are queried with read-write access. To set read-only access for smart large objects, include the IFX_LO_READONLY=1 environment variable in the JDBC connection string.

This feature is documented in the IBM Informix JDBC Driver Programmer's Guide.

Encrypt storage spaces

You can now encrypt storage spaces. The data in encrypted storage spaces is unintelligible without the encryption key. Encrypting storage spaces is an effective way to protect sensitive information that is stored on media.

You enable storage space encryption by setting the new DISK_ENCRYPTION configuration parameter. Subsequently, storage spaces that you create are automatically encrypted by default. However, you can choose to create an unencrypted storage space with onspaces -c commands or SQL administration API commands. You can also encrypt or decrypt storage spaces during a restore with the ON-Bar or ontape utilities. You can see if storage spaces are encrypted with the onstat -d and oncheck -pr commands.

This feature is documented in the IBM Informix Security Guide, the IBM Informix Backup and Restore Guide, and the IBM Informix Administrator’s Reference.
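A minimal onconfig sketch; the keystore name is hypothetical, and the exact value syntax and cipher choices should be confirmed in the Security Guide:

```
# onconfig fragment: encrypt newly created storage spaces by default.
# The keystore name is hypothetical; cipher choices vary by release.
DISK_ENCRYPTION keystore=ifx_keystore,cipher=aes128
```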

Advanced analytics for time series data

You can now use advanced analytics functions to analyze time series data for patterns or abnormalities:

  • Quantify similarity, distance, and correlation between two sequences using the Lp-norm, Dynamic Time Warping, or Longest Common Subsequence method.
  • Search based on specific measures like similarity, distance, and correlation. Find the portions of a sequence which are related to a given pattern.
  • Detect anomalies within time series data. Given a long time series sequence, anomaly detection provides the ability to tell which part of the time series is dramatically different from the portion of data nearby in time order.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Enhancements to time series functions

Data type support in expressions in time series functions is enhanced in the following ways:

  • You can now include BSON field values in expressions in the CountIf and Apply functions. BSON fields are extracted as FLOAT values by default, but you can explicitly cast BSON fields to an INTEGER, BIGINT, or LVARCHAR data type.
  • You can now include INTERVAL values, DATETIME values, and Boolean values in expressions in the CountIf function.
  • You can now define a calendar to specify the window interval for partitions in a rolling window container. The TSContainerCreate function allows a calendar name from the CalendarTable table in the window_interval argument.

These features are documented in the IBM Informix TimeSeries Data User’s Guide.

Easier embedded deployment

You can now deploy Informix with the ifxdeploy utility as a non-root user on UNIX and Linux. The ifxdeploy utility has new options and does not use a configuration file. However, you can continue to use the deprecated syntax and configuration file.

This feature is documented in the IBM Informix Embeddability Guide.

What's new in IBM Informix version 12.10.xC7

COALESCE function

You can now evaluate a series of expressions to return the first non-null value by running the COALESCE function.

This feature is documented in the IBM Informix Guide to SQL: Syntax.
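For example (the table and column names are hypothetical):

```sql
-- Return the first non-null contact method, or 'none' if all are NULL.
SELECT name, COALESCE(work_phone, mobile_phone, 'none') AS contact
  FROM customer_contacts;
```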

Update BSON arrays in an SQL statement

When you run the BSON_UPDATE function in an SQL statement, you can now include the MongoDB array update operators $addToSet, $pop, $pullAll, and $push, plus the array update operator modifiers $each, $position, $slice, and $sort.

This feature is documented in the IBM Informix Guide to SQL: Syntax.
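A sketch; the table, column, and document structure are hypothetical:

```sql
-- Append a tag to an array field inside a BSON column with $push.
UPDATE products
   SET data = BSON_UPDATE(data, '{ "$push": { "tags": "sale" } }');
```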

JSON compatibility enhancements

JSON compatibility is improved in the following ways:

  • Faster updates of JSON documents

    Updates that you make to JSON documents now run faster because they are run in the database server instead of in the wire listener. When you run a MongoDB update operation on a document in an Informix database, the database server internally runs the update operation with the BSON_UPDATE function. The wire listener now sends document updates to the database server first by default. Previously, the wire listener processed document updates by default. The default setting of the update.mode parameter in the wire listener configuration file is now mixed instead of client.

  • More options for SQL statements and transactions

    Bind host variables to SQL statements: You can now include host variables when you run SQL statements through the MongoDB API. The ability to run SQL statements must be enabled by the security.sql.passthrough=true wire listener configuration parameter.

    Securely run SQL functions and stored procedures: You can now run an SQL function or stored procedure even when the ability to run other SQL statements is disabled by the security.sql.passthrough=false wire listener configuration parameter. Include the Informix JSON command runFunction or runProcedure in a command document for the db.runCommand command or the GET REST API method.

    Run batch transactions: You can now run multiple command documents as a single transaction. Include command documents with the execute parameter of the Informix JSON transaction command.

  • MongoDB compatibility

    MongoDB shell version 3.2 support: You can now run commands with version 3.2 of the MongoDB shell.

    GridFS support: You can now use the GridFS functionality that is available in the MongoDB drivers for storing and retrieving BSON documents that are larger than 16 MB.

  • Aggregate or slice time series data

    You can now use the MongoDB aggregation framework pipeline to aggregate time series data or return the time series element in a time range.

These features are documented in the IBM Informix JSON Compatibility Guide.

Load JSON documents with the MQTT protocol

You can now load JSON documents into Informix databases with the MQTT protocol by defining an MQTT wire listener. The MQTT protocol is a light-weight messaging protocol that you can use to load data from devices or sensors. For example, you can use the MQTT protocol to publish data from sensors into a time series that contains a BSON column. You can also use the MQTT protocol to publish data to relational tables or JSON collections.

You configure an MQTT wire listener by setting the listener.type=mqtt parameter in the wire listener configuration file. From an MQTT client, you send PUBLISH packets to insert data. You cannot, however, query data from MQTT clients with SUBSCRIBE packets. You can authenticate MQTT client users through the wire listener.

This feature is documented in the IBM Informix JSON Compatibility Guide.
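A minimal wire listener configuration sketch. The listener.type=mqtt parameter is documented; the port and connection URL values are assumptions (1883 is the conventional MQTT port):

```
# MQTT wire listener configuration fragment (illustrative values)
listener.type=mqtt
listener.port=1883
url=jdbc:informix-sqli://localhost:9088/sensordb:INFORMIXSERVER=ifxserver
```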

Quickly add or remove shard servers with consistent hashing

You can quickly add or remove a shard server by using the new consistent hashing distribution strategy to shard your data. With consistent hash-based sharding, the data is automatically distributed between shard servers in a way that minimizes the data movement when you add or remove shard servers. The original hashing algorithm redistributes all the data when you add or remove a shard server. You can specify the consistent hashing strategy when you run the cdr define shardCollection command.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Improved spatiotemporal searching

Spatiotemporal searching is improved in the following ways:

  • You can now track stationary objects and objects that temporarily do not have a signal.
  • You can speed the indexing of new data by running multiple Scheduler tasks in parallel. Alternatively, you can configure time series loader functions to trigger spatiotemporal indexing as time series data is saved to disk.
  • You can now configure storage spaces and extent sizes for spatiotemporal data and for each of the spatiotemporal indexes.
  • You no longer have any restrictions on the configuration of the time series table or the number of rows. You can index spatiotemporal data in a tenant database. Your location data can be in a BSON column within the TimeSeries subtype. Your time series data can be compressed or have a hertz frequency.
  • Distance measurements now use a spherical calculation that is based on longitude and latitude coordinates instead of a Euclidean calculation that is based on the Cartesian system.

If you have spatiotemporal data and indexes from a previous release of Informix, you must drop your existing spatiotemporal tables and indexes and re-create them. All spatiotemporal functions are renamed; however, applications that use the previous function names still run.

This feature is documented in the IBM Informix Spatiotemporal Search for Moving Objects User's Guide.

Trigger spatiotemporal indexing while loading time series data

If you have a spatiotemporal subtrack table created for a time series, you can trigger the indexing of spatiotemporal data when you load time series data. Set the writeflag parameter to 512 or 1024 in the TSL_Commit, TSL_FlushAll, or TSL_Flush function to trigger spatiotemporal indexing as time series data is saved to disk.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Improved pattern match searching

You can now create a pattern match index or run a pattern match search on a field in a BSON document. The BSON document must be in a BSON column in the TimeSeries subtype, and the field must hold numeric data. Run the TSCreatePatternIndex function or the TSPatternMatch function and specify the BSON column and field name.

You can now extend the time range of an existing pattern match index to incrementally update the index. You can extend the index time range in either direction, or both directions, but the existing and new time ranges must overlap. Run the TSCreatePatternIndex function with new beginning or ending times.

These features are documented in the IBM Informix TimeSeries Data User’s Guide.

Limit future data for rolling window containers

You can limit the number of partitions for future dates in rolling window containers. Limiting partitions for future data protects the partition for current data from being destroyed. When you create rolling window containers with the TSContainerCreate function, you can now specify the maximum number of future partitions. You can limit future partitions for existing rolling window containers with the TSContainerManage function.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Enhancements for hertz data

You can now enter whole-second blocks of hertz records into a time series out of chronological order. For example, if a time series is missing data for an entire second in the past, you can enter the data. However, you must enter subsecond elements within a second in chronological order.

You can now store hertz data in rolling window containers.

These features are documented in the IBM Informix TimeSeries Data User’s Guide.

Longer fragmentation schemes for time series virtual tables

The fragmentation scheme for virtual tables is adapted from the fragmentation scheme of the base table. In some cases, the resulting virtual table creation statement is significantly longer than the original table creation statement. If the virtual table creation statement fails because it exceeds the maximum length of an SQL statement, you can specify one of the two new fragmentation flags in the TSVTMode parameter of the TSCreateVirtualTab procedure. If the fragmentation clause for the base table has expressions that each map to a single fragment and does not have a REMAINDER clause, use the fragment_verbatim flag. If the fragmentation clause for the base table has ambiguous expressions or a REMAINDER clause, use the fragment_with_case flag.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Convert spatial data to GeoJSON format

You can now display spatial data in JSON-based applications by converting a geometry to a BSON document in GeoJSON format. Run the SE_AsBSON function on a geometry to return a BSON document.

This feature is documented in the IBM Informix Spatial Data User's Guide.
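For example (the table and column names are hypothetical):

```sql
-- Convert each geometry to a BSON document in GeoJSON format.
SELECT SE_AsBSON(geom) FROM sensor_locations;
```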

What's new in IBM Informix version 12.10.xC6

Support for additional platforms

  • Informix is now available on IBM POWER8® for ppc64le with Red Hat Enterprise Linux 7.1, SUSE Linux Enterprise Server 12, and Ubuntu 14.04 LTS.
  • The spatial data feature is now available on the following platforms:
    • IBM Power Series® 64-bit with Red Hat Enterprise Linux ES releases 5.3, 6, and 7, and SUSE SLES 11
    • IBM zSeries 64-bit with Red Hat Enterprise Linux ES releases 5.3 and 6, and SUSE SLES 11

Server changes that affect migration

Due to new features and functionality, Informix version 12.10.xC6 contains the following new configuration parameters and environment variable:

  • BAR_MAX_RESTORE configuration parameter
  • IFXGUARD configuration parameter
  • SHARD_ID configuration parameter
  • SHARD_MEM configuration parameter
  • SMX_NUMPIPES configuration parameter
  • TENANT_LIMIT_CONNECTIONS configuration parameter
  • TENANT_LIMIT_MEMORY configuration parameter
  • IFX_SOC_KEEPALIVE environment variable

These changes are summarized in the Migration Guide.

Limit shared memory and connections for tenant databases

You can limit shared memory and the number of connections for tenant databases in a multitenancy environment. You can set limits for all tenants with configuration parameters, or for individual tenants with parameters to the tenant create or tenant update argument of the task or admin SQL administration API command:

  • Limit the amount of shared memory for all sessions that are connected to the tenant database. When the limit is exceeded, the session that is using the most shared memory is terminated. Set the TENANT_LIMIT_MEMORY configuration parameter or include the tenant_limit_memory parameter.
  • Limit the number of client connections to a tenant database. When the limit is reached, subsequent connection requests to the tenant database are rejected. Set the TENANT_LIMIT_CONNECTIONS configuration parameter or include the tenant_limit_connections parameter.

This feature is documented in the IBM Informix Administrator’s Reference.
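A sketch of the server-wide settings in the onconfig file; the values are illustrative, and the memory unit is an assumption to confirm in the Administrator's Reference:

```
# onconfig fragment: limits that apply to every tenant database
TENANT_LIMIT_MEMORY 204800        # assumed unit: KB
TENANT_LIMIT_CONNECTIONS 100
```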

Restore tenant databases to a point in time

You can now restore a tenant database to a particular point in time. Run the onbar -r command with the new -T option to specify the tenant database and the -t option to specify the point in time.

This feature is documented in the IBM Informix Administrator’s Guide and the IBM Informix Backup and Restore Guide.
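For example (the tenant database name is hypothetical, and the date-time format is an assumption):

```
onbar -r -T tenant1 -t "2017-03-01 12:00:00"
```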

Control restore resources

You can set the number of parallel processes to run during a restore independently from the number of processes for a backup with the new BAR_MAX_RESTORE configuration parameter. Previously, the BAR_MAX_BACKUP configuration parameter controlled the number of processes for both backups and restores.

This feature is documented in the IBM Informix Backup and Restore Guide.

Parallel sharded queries

You can now run SELECT statements in sharded queries in parallel instead of serially on each shard. Parallel sharded queries return results faster and also provide the following benefits:

  • Reduced memory consumption: Table consistency is enforced on the shard servers, which eliminates the processing of data dictionary information among the shard servers.
  • Reduced network traffic: Client connections are multiplexed over a common pipe instead of individual connections being created between each client and every shard server. Client connections are authenticated on only one shard server instead of on every shard server. Network traffic to check table consistency is eliminated.

To enable parallel sharded queries, set the new SHARD_ID configuration parameter in the onconfig file to a unique value on each shard server in the shard cluster. Also set the new sharding.parallel.query.enable=true and sharding.enable=true parameters in the wire listener configuration file for each shard server. You can customize how shared memory is allocated for parallel sharded queries on each shard server by setting the new SHARD_MEM configuration parameter. You can reduce latency between shard servers by increasing the number of pipes for SMX connections with the new SMX_NUMPIPES configuration parameter.
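
The settings might be combined as in this sketch; the numeric values are illustrative, and SHARD_ID must simply be unique on each shard server:

```
# onconfig file on each shard server:
SHARD_ID      1     # unique on each shard server
SHARD_MEM     1     # shared-memory allocation mode (illustrative value)
SMX_NUMPIPES  2     # pipes for SMX connections (illustrative value)

# wire listener configuration file on each shard server:
sharding.enable=true
sharding.parallel.query.enable=true
```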

If you plan to upgrade your existing shard cluster from a previous version of Informix 12.10, upgrade and set the SHARD_ID configuration parameter on all the shard servers to enable parallel sharded queries.

This feature is documented in the IBM Informix JSON Compatibility Guide and the IBM Informix Enterprise Replication Guide.

MongoDB 2.6 and 3.0 compatibility

Informix now supports the following MongoDB commands:

  • The following database management commands:
    • The query and write operation commands insert, update, and delete.
    • The instance administration commands createIndexes, listCollections, and listIndexes.
    • The user management commands (MongoDB 2.6 and later): createUser, updateUser, dropUser, dropAllUsersFromDatabase, grantRolesToUser, revokeRolesFromUser, and usersInfo.
    • The role management commands: createRole, updateRole, dropRole, dropAllRolesFromDatabase, grantPrivilegesToRole, revokePrivilegesFromRole, grantRolesToRole, revokeRolesFromRole, and rolesInfo.
  • The query and projection command $eq.
  • The field update operators $mul, $min, $max, and $currentDate.
  • The pipeline aggregation operator $out.

You can authenticate MongoDB clients with the MongoDB 3.0 SCRAM-SHA-1 authentication method. You must upgrade the user schema for existing users.

You upgrade to MongoDB 3.0 by setting the new mongo.api.version parameter to 3.0 in the wire listener configuration file.

These features are documented in the IBM Informix JSON Compatibility Guide.

Wire listener enhancements

The wire listener has the following new parameters that you can set to customize the wire listener.

MongoDB compatibility
Specify the version of MongoDB API compatibility with the mongo.api.version parameter.
Security
Disable commands with the command.blacklist parameter.
Specify the authentication type with the db.authentication parameter.
Specify an IP address as the administrative host with the listener.admin.ipAddress parameter.
Set authentication timeout with the listener.authentication.timeout parameter.
Add information to HTTP headers with the listener.http.headers parameter.
Resource management
Configure a memory monitor to reduce resource usage with the listener.memoryMonitor parameters.
Create a separate thread pool for administrative connections with the listener.pool.admin.enable parameter.
Specify the timeout periods for socket connections with the listener.socket.accept.timeout and listener.socket.read.timeout parameters.
Suppress pooled connection checking with the pool.lenient.return.enable and the pool.lenient.dispose.enable parameters.
Specify the number of maintenance threads for connection pools with the pool.service.threads parameter.

These features are documented in the IBM Informix JSON Compatibility Guide.

Authenticate wire listener connections with Informix

You can configure the database server to authenticate MongoDB client users, who connect through the wire listener, with a pluggable authentication module (PAM). Because you administer user accounts through the database server, you can audit user activities and configure fine-grained access control. In contrast, if you use MongoDB authentication, MongoDB clients connect to the database server as the wire listener user that is specified by the url parameter.

This feature is documented in the IBM Informix JSON Compatibility Guide.

Starting the wire listener for the REST API

You no longer need to provide the path to Apache Tomcat when you start the wire listener for the REST API.

This feature is documented in the IBM Informix JSON Compatibility Guide.

Faster communication between high-availability servers

You can now reduce latency between high-availability servers by increasing the number of pipes that are used for the server multiplexer group (SMX) connections between servers. Set the new SMX_NUMPIPES configuration parameter to the number of pipes to use.

This feature is documented in the IBM Informix Administrator’s Guide and the IBM Informix Administrator’s Reference.

Faster index transfer to secondary servers

When the LOG_INDEX_BUILD configuration parameter is enabled, the transfer of newly created detached indexes to HDR or remote stand-alone secondary servers uses light scans when possible, which leads to faster transfer rates.

Easier cloning of database servers

When you clone a replication or high-availability server with the ifxclone utility, you can include the new --createchunkfile option to automatically create the cooked chunks and mirror chunks on the target server that exist on the source server.

This feature is documented in the IBM Informix Administrator’s Reference.

Reoptimize JDBC queries

When you run queries through the Informix JDBC Driver, you can now reoptimize previously prepared query plans. Set the new withReoptimization argument of the IfmxPreparedStatement.executeQuery method to true.

This feature is documented in the IBM Informix JDBC Driver Programmer's Guide.

Keep JDBC socket connections open

You can now keep long-running JDBC socket connections from timing out due to inactivity. Set the IFX_SOC_KEEPALIVE environment variable to true when you open a socket connection with the Informix JDBC Driver.

This feature is documented in the IBM Informix JDBC Driver Programmer's Guide.

Avoid caching SQL statements with unpredictable query plans

Some SQL statements produce significantly different query plans depending on the values of the placeholders that are passed to the database server when the statements are run. Using a cached query plan for such a statement might result in poor performance. You can now avoid caching an SQL statement whose query plan is unpredictable by including the AVOID_STMT_CACHE optimizer directive.
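
As a sketch, the directive is embedded in Informix comment-style optimizer directive syntax; the table and column names here are hypothetical:

```sql
SELECT --+ AVOID_STMT_CACHE
       order_id, order_total
  FROM orders
 WHERE customer_id = ?;
```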

This feature is documented in the IBM Informix Guide to SQL: Syntax

Prioritize databases for automatic update statistics

You can now assign a priority to each of your databases in the Auto Update Statistics (AUS) maintenance system. By default all databases have a medium priority. You can assign specific databases a high or a low priority to ensure that statistics for your most important databases are updated first. Statistics for low priority databases are updated after high and medium priority databases, if time and resources permit. For example, if you have a system with a production and a test database, you can assign the production database a high priority and the test database a low priority. You can also disable AUS for a database.

You can set AUS priorities in the IBM OpenAdmin Tool (OAT) for Informix or by adding rows to the ph_threshold table in the sysadmin database.

This feature is documented in the IBM Informix Performance Guide.

Enhanced auditing of Informix databases with IBM Security Guardium

You now have increased capabilities when you audit the user actions for your Informix database server with IBM Security Guardium®, version 10.0. IBM Security Guardium can now mask sensitive data. IBM Security Guardium can now audit, and if necessary, close, any Informix connection, regardless of the connection protocol. Previously, IBM Security Guardium audited and closed only TCP connections.

After you set up the Guardium server, you start the ifxguard utility to monitor connections to your Informix databases. You can customize the behavior of the ifxguard utility by editing the ifxguard configuration file and by setting the IFXGUARD configuration parameter in the onconfig file.

This feature is documented in the IBM Informix Security Guide.

Show time series reference count

You can now see the time series reference count in the metadata section of the output of the TSInfo function. The time series reference count is the number of rows in time series tables that reference the same time series data in a container.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Default dbspace for time series containers

You can now specify NULL instead of a dbspace name when you create containers with the TSCreateContainer procedure. The container is created in the same dbspace as the time series table, or, if the table is in a tenant database, the dbspace for the tenant database catalogs.
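
For example, assuming the usual TSCreateContainer argument order (container name, dbspace, row type, container size, growth size) and hypothetical names:

```sql
-- NULL instead of a dbspace name; the container is created in the
-- dbspace of the time series table:
EXECUTE PROCEDURE TSCreateContainer('readings_cont', NULL,
                                    'reading_row_t', 0, 0);
```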

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Enhanced monitoring of IBM Informix Warehouse Accelerator queries

When you run the ondwa tasks command to monitor Informix Warehouse Accelerator queries, the command now displays client session information for running queries and basic information about queued queries. The client session information includes the session ID and the server number to which the client is connected.

This feature is documented in the IBM Informix Warehouse Accelerator Administration Guide.

What's new in IBM Informix version 12.10.xC5

Server changes

Due to new features and functionality, Informix 12.10.xC5 contains the following changed and new configuration parameters and a changed session environment option:

  • Changed configuration parameter: AUTO_REPREPARE
  • Changed configuration parameter: TAPESIZE
  • Changed configuration parameter: LTAPESIZE
  • Changed session environment option: IFX_AUTO_REPREPARE
  • New configuration parameter: TENANT_LIMIT_SPACE
  • New configuration parameter: SESSION_LIMIT_MEMORY
  • New configuration parameter: SESSION_LIMIT_TEMPSPACE
  • New configuration parameter: SESSION_LIMIT_LOGSPACE
  • New configuration parameter: SESSION_LIMIT_TXN_TIME

These changes are summarized in the Migration Guide.

Rolling upgrades for high-availability clusters

You can upgrade a high-availability cluster to the next fix pack or interim update (PID) with minimal interruption to client applications. During the rolling upgrade process, the cluster remains online even though the servers in the cluster are running on different levels of the software.

For example, to upgrade from 12.10.xC4 to 12.10.xC5: Stop a secondary server in the cluster, install the new fix pack on it, and then start the upgraded server. After you upgrade all of the secondary servers, stop the primary server and promote one of the secondary servers to the primary server. Then, you can upgrade the original primary server, start it as a secondary server, and then promote it back to primary server.

The rolling upgrade process, as well as prerequisites and restrictions, are documented in the IBM Informix Migration Guide.

Support for Java 7

IBM Informix 12.10.xC5 software supports Java Platform Standard Edition (Java SE), Version 7.

Informix installation applications install IBM Runtime Environment, Java Technology Edition, Version 7 on most platforms by default. That version is used to run Java user-defined routines that are created in the server.

Check the machine notes for your operating system platform to determine whether the installation application requires a preinstalled JRE.

For the list of supported Java development software, as well as prerequisites and restrictions, go to Java technology dependencies.

Improved installation logging and debugging

The default name and location of the installation log file for both the database server and client products is now /tmp/iad_act.log (UNIX, Linux, Mac OS X) or \tmp\iad_act.log (Windows). You can specify a different name and location for the installation log file with the -DLOG_FILE option in the installation command.

The -DDEBUG option is deprecated. Now you have more control over the debugging and tracing information for the installation process. By default, tracing is disabled. You can set the tracing level 1 - 9 with the -DDEBUG_LEVEL option in the installation command. If tracing is enabled, the default name and location of the debugging file is /tmp/iad_dbg.log (UNIX, Linux, Mac OS X) or \tmp\iad_dbg.log (Windows). You can set the name and location of the debug file with the -DDEBUG_FILE option.
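
For example, an installation command might combine these options as follows; the paths and tracing level are illustrative:

```shell
./ids_install -DLOG_FILE=/opt/logs/informix_install.log \
              -DDEBUG_LEVEL=5 \
              -DDEBUG_FILE=/opt/logs/informix_install_dbg.log
```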

These features are documented in the IBM Informix Installation Guide and the IBM Informix Client Products Installation Guide.

Easier silent installations

You can streamline a silent installation on systems with existing installations of the database server or client products. Include the -DOVERWRITE_PRODUCT=TRUE option with the ids_install, installclientsdk, or installconnect command to overwrite an existing installation. Otherwise, the installation application exits if it finds an existing product installation.
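
A sketch of such a command, assuming a silent installation that is driven by a response file (the response file name is hypothetical):

```shell
./ids_install -i silent -f my_response.properties -DOVERWRITE_PRODUCT=TRUE
```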

This feature is documented in the IBM Informix Installation Guide and the IBM Informix Client Products Installation Guide.

Deprecated options for installation commands

The following options to the ids_install, installclientsdk, and installconnect commands are deprecated:

  • UNIX, Linux: The -i swing installation command option is deprecated. Use the -i gui option in the installation command to run the installation application in GUI mode.
  • The -DDEBUG option is deprecated. Use the new -DDEBUG_LEVEL option in the installation command to set the tracing level.

Control tenant resources

You can further control the resources that are available for each tenant database to improve performance and restrict the tenant database size. You can include new optional properties in the tenant definition when you run the admin() or task() SQL administration command with the tenant create or tenant update arguments. Tenant properties take precedence over related configuration parameters.

  • You can specify the session_limit_memory property to end sessions that exceed a specified amount of shared memory, or the session_limit_tempspace property to end sessions that exceed a specified amount of temporary storage space.
  • You can specify the session_limit_logspace property to roll back transactions that fill too much log space, or the session_limit_txn_time property to end transactions that take too long.
  • You can limit the total amount of permanent storage space for a tenant database by setting the tenant_limit_space property or the TENANT_LIMIT_SPACE configuration parameter.

This feature is documented in the IBM Informix Administrator’s Guide and the IBM Informix Administrator’s Reference.

Limit access to tenant databases in OAT

An Informix administrator can assign a tenant database to a tenant owner in the IBM OpenAdmin Tool (OAT) for Informix. A tenant owner can access only the assigned database.

Limit session resources

You can limit resources for all sessions that are owned by non-administrative users to prevent performance issues. Limiting session resources prevents any session from using so many resources that other sessions cannot continue processing transactions. Limiting session resources can be useful in embedded environments.

You can specify to end sessions that exceed a specified amount of shared memory or temporary storage space:
  • Set the SESSION_LIMIT_MEMORY configuration parameter to the maximum amount of shared memory that can be allocated for a session.
  • Set the SESSION_LIMIT_TEMPSPACE configuration parameter to the maximum amount of temporary storage space that can be allocated for a session.
You can specify to roll back transactions that are too large or take too long:
  • Set the SESSION_LIMIT_LOGSPACE configuration parameter to the maximum amount of log space that a transaction can fill.
  • Set the SESSION_LIMIT_TXN_TIME configuration parameter to the maximum number of seconds that a transaction can run.
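
An onconfig sketch with illustrative values; check the Administrator's Reference for the units and valid ranges of each parameter:

```
SESSION_LIMIT_MEMORY     102400   # shared memory per session
SESSION_LIMIT_TEMPSPACE  102400   # temporary storage per session
SESSION_LIMIT_LOGSPACE   51200    # log space per transaction
SESSION_LIMIT_TXN_TIME   600      # seconds per transaction
```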

This feature is documented in the IBM Informix Administrator’s Guide and the IBM Informix Administrator’s Reference.

Larger maximum tape size for backups

The maximum value of the TAPESIZE and LTAPESIZE configuration parameters is now 9223372036854775807 KB, which is equivalent to approximately 9 ZB.

This feature is part of storage optimization and is documented in the IBM Informix Backup and Restore Guide.

New locale for the JDBC Driver

The Informix JDBC Driver now supports the Estonian locale, et_ee.

This feature is documented in the IBM Informix JDBC Driver Programmer's Guide.

Correlated aggregate expressions

In a subquery, an aggregate expression with a column operand that was declared in a parent query block is called a correlated aggregate. The column operand is called a correlated column reference. When a subquery contains a correlated aggregate with a correlated column reference, the database server now evaluates that aggregate in the parent query block where the correlated column reference was declared. If the aggregate contains multiple correlated column references, the aggregate is processed in the parent query block (where the correlated column reference originated) that is the nearest parent to the subquery.

This feature is documented in the IBM Informix Guide to SQL: Syntax

Control repreparation

You can improve the speed of queries by controlling when queries are automatically reprepared. The AUTO_REPREPARE configuration parameter and the IFX_AUTO_REPREPARE session environment option support these additional values:

  • 3 = Enables automatic repreparation in optimistic mode. If a statement ran correctly less than one second ago, do not reprepare the statement.
  • 5 = Enables automatic repreparation after UPDATE STATISTICS is run. If a statement includes a table on which UPDATE STATISTICS was run, reprepare the statement.
  • 7 = Enables automatic repreparation in optimistic mode and after UPDATE STATISTICS is run.
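
For example, to enable both behaviors (value 7) instance-wide and then for a single session; the SET ENVIRONMENT form is a sketch of the session-level equivalent:

```sql
-- In the onconfig file:  AUTO_REPREPARE 7

-- Per session:
SET ENVIRONMENT IFX_AUTO_REPREPARE '7';
```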

This feature is documented in the IBM Informix Administrator’s Reference and IBM Informix Guide to SQL: Syntax.

Additional platforms for basic text searching

You can now use the basic text search feature on the following platforms:

  • IBM Power Series 64-bit with Red Hat Enterprise Linux ES releases 5.3, 6, and 7, and SUSE SLES 11
  • IBM zSeries 64-bit with Red Hat Enterprise Linux ES releases 5.3 and 6, and SUSE SLES 11

Manipulate JSON and BSON data with SQL statements

You can use SQL statements to manipulate BSON data. You can create BSON columns with the SQL CREATE TABLE statement. You can manipulate BSON data in a collection that was created by a MongoDB API command. You can use the CREATE INDEX statement to create an index on a field in a BSON column. You can insert data with SQL statements or Informix utilities. You can view BSON data by casting the data to JSON format or running the new BSON value functions to convert BSON field values into standard SQL data types, such as INTEGER and LVARCHAR. You can use the new BSON_GET and BSON_UPDATE functions to operate on field-value pairs.
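
A hedged sketch of these operations; the table, field, and index names are hypothetical, and BSON_VALUE_INT and BSON_VALUE_LVARCHAR stand in for the family of BSON value functions:

```sql
CREATE TABLE products (data BSON);

-- View documents by casting BSON to JSON:
SELECT data::JSON FROM products;

-- Convert a BSON field value to a standard SQL data type:
SELECT BSON_VALUE_INT(data, 'quantity') FROM products;

-- Index a field in the BSON column:
CREATE INDEX ix_sku ON products (BSON_VALUE_LVARCHAR(data, 'sku'));
```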

This feature is documented in the IBM Informix Guide to SQL: Syntax

High availability for MongoDB and REST clients

You can provide high availability to MongoDB and REST clients by running a wire listener on each server in your Informix high-availability cluster.

You can also provide high availability between the wire listener and the Informix database server. Connect the wire listener to the database server through the Connection Manager or specify an sqlhosts file in the url parameter in the wire listener properties file.

This feature is documented in the IBM Informix JSON Compatibility Guide.

Wire listener configuration enhancements

You can set these new or updated parameters in the wire listener properties file:

  • url parameter can include JDBC environment variables.
  • New: listener.hostName parameter can specify the listener host name to control the network adapter or interface to which the wire listener connects.
  • New: collection.informix.options parameter can specify table options to automatically add shadow columns or enable auditing when you create a JSON collection.
  • New: command.listDatabases.sizeStrategy parameter can specify a strategy for computing the size of a database when you run the MongoDB listDatabases command.
  • New: fragment.count parameter can specify the number of fragments to create for a collection.
  • New: jdbc.afterNewConnectionCreation parameter can specify SQL statements, for example, SET ENVIRONMENT statements, to run after connecting to the database server.
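
A wire listener properties file might combine several of these parameters as in this sketch; the host name and values are illustrative, and the exact value formats are documented in the JSON Compatibility Guide:

```
listener.hostName=myhost.example.com
fragment.count=4
command.listDatabases.sizeStrategy=estimate
```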

These features are documented in the IBM Informix JSON Compatibility Guide.

Wire listener query support

The wire listener now supports these types of queries:

  • Join queries on JSON data, relational data, or both JSON and relational data.
  • Array queries on JSON data with the MongoDB $elemMatch query operator.

These features are documented in the IBM Informix JSON Compatibility Guide.

Enhanced account management through the wire listener

You can control user authorization to Informix databases through the wire listener by locking and unlocking user accounts or individual databases with the new Informix JSON lockAccounts and unlockAccounts commands.

This feature is documented in the IBM Informix JSON Compatibility Guide.

Load pure JSON documents into time series

You can now load data that is entirely in JSON documents directly into time series. Previously, you had to provide primary key values and time stamps in plain text format. Run the new TSL_PutJson() function to load pure JSON documents. For example, you can load the JSON documents that are generated by wireless sensor devices without preprocessing the data.

You can load JSON documents from a file or from a named pipe.
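
As a sketch, assuming hypothetical table, column, and file names:

```sql
-- Load JSON documents from a file into the time series in the
-- "readings" column of the "sensor_tab" table:
EXECUTE FUNCTION TSL_PutJson('sensor_tab|readings',
                             'file:/data/readings.json');
```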

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Faster loading of time series data files

You can now quickly load files directly into the database by specifying a file path as the second argument to the TSL_Put function. Previously, the TSL_Put function accepted data as only LVARCHAR or CLOB data types, which require intermediate steps to process the data.

The time series data that you load with the TSL_Put function can now contain JSON or BSON documents as values for columns other than the primary key and time stamp columns. Unlike the TSL_PutSQL function, when you load JSON columns with the TSL_Put function, you do not need to create an external table from which to load the data.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Improved logging for the time series loader

If you write a loader program to load time series data, you can choose to retrieve loader messages from a queue instead of logging the messages in a message log file. Retrieving messages from a queue results in less locking contention than logging messages in a file.

You can retrieve queued messages as formatted message text in English by running the new TSL_GetFmtMessage function.

Alternatively, you can run the TSL_GetLogMessage function to return message numbers and then run the new TSL_MessageSet function to return the corresponding message text. This method is useful if you want to provide your own message text or if you want to retrieve message text on the client.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Create new time series while loading data

You can now create a new time series instance while loading data with a time series loader program. Previously, you had to insert primary key values and create time series instances before you loaded data with a loader program.

For a loader program, you can specify the definition of a time series instance by running the new TSL_SetNewTS function. You can specify whether the time series definition applies to the current loader session or to all loader sessions. When you load data with a TSL_Put function for a new primary key value, a new row is added to the table and a new time series instance is created based on the definition.

For a virtual table, you can create a new time series instance while quickly inserting elements into containers. In the TSCreateVirtualTab procedure, set the NewTimeSeries parameter and the elem_insert flag of the TSVTMode parameter.

You can automatically set the origin of any new time series instance to the day that the time series is created by including formatting directives for the year, month, and day. You can include formatting directives for the origin in the time series input string in an INSERT statement or in the NewTimeSeries parameter in the TSL_SetNewTS function and the TSCreateVirtualTab procedure.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Display time series storage space usage

You can find the amount of storage space that is used by a time series by running the new TSInfo function. You can customize the level of detail of the information. For example, you can display details about element pages, such as the number of pages, the number of bytes, the amount of free space, and the number of null pages. You can also return information about other properties of a time series, such as the origin, the type of values, and containers.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

View active time series loader sessions

When you run a time series loader program, you open a loader session for each table and TimeSeries column combination into which you load data. You can view a list of handles for active loader sessions by running the TSL_ActiveHandles function. The handle consists of the table name and the TimeSeries column name.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Analyze time series data for matches to patterns

You can search time series data for matches to a specific pattern of values. For example, if you identify a sequence of four values that indicate a problem, you can search for other sequences of four values that are similar to the original sequence. You run the TSPatternMatch function to find pattern matches. You specify the margin of error and whether to search through consecutive sequences of values or through every possible subsequence of values.

You can create a pattern matching index to improve query performance by running the TSCreatePatternIndex function.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Clip selected columns of time series data

You can extract data between two timepoints in a time series and return a new time series that contains only the specified columns of the original time series. Run the new ProjectedClip function to clip time series data from only the columns of the TimeSeries data type that you specify.

The data that you load into your time series might be configured to store a null value when a value does not differ from the previous value. If you have a low frequency of nonnull values, you can replace null values with the previous nonnull values in the output time series:

  • Replace only the first value for a column, if that value is null. Append (lf) to the column name in the column list to designate a low frequency column.
  • Replace all null values with the corresponding previous nonnull values. Append (nn) to the column name in the column list to designate a column with no null return values.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Track moving objects

You can track a moving object, such as a vehicle, by capturing location information for the object at regular time intervals. You can use the new spatiotemporal search extension to index the data and then query on either time or on location to determine the relationship of one to the other. You can query when an object was at a specified location, or where an object was at a specified time. You can also find the trajectory of a moving object over a range of time.

The spatiotemporal search extension depends on the TimeSeries and spatial extensions. You store the spatiotemporal data in a TimeSeries data type with columns for longitude and latitude. You index and query the spatiotemporal data with the new spatiotemporal search functions. You can also query spatiotemporal data with time series and spatial routines.

You can now track moving or stationary objects at a greater frequency, and time values can be entered as either a string or a number.

This feature is documented in the IBM Informix Spatiotemporal Search for Moving Objects User's Guide.

Enhancements to Informix Warehouse Accelerator

Informix Warehouse Accelerator has the following enhancements:

  • You can now accelerate queries that include these scalar functions: CURRENT, SQRT, and SYSDATE.
  • You can now view information about existing data marts in the accelerators by running the ondwa listmarts command.
  • You can now load data marts faster by adding a second DWAVP virtual processor. The DWAVP virtual processor runs Informix Warehouse Accelerator administrative functions and procedures. A DWAVP virtual processor is automatically allocated when the first Informix Warehouse Accelerator related activity occurs. In systems with significant data mart administration activity, you can define a second DWAVP virtual processor to speed the loading of data marts.

These features are documented in the IBM Informix Warehouse Accelerator Administration Guide.

What's new in IBM Informix version 12.10.xC4

Server changes

Due to new features and functionality, Informix 12.10.xC4 contains the following server changes:

  • New configuration parameters: CDR_MEM, SESSION_LIMIT_LOCKS
  • Changed configuration parameter: VP_MEMORY_CACHE_KB
  • New environment variable: IFX_PUA_DISPLAY_MAPPING
  • New session environment option: IFX_SESSION_LIMIT_LOCKS
  • New SQL reserved word: IFX_SESSION_LIMIT_LOCKS
  • New system database table: tenant
  • Changed sysindices system catalog table: Contains new columns, indexattr and jparam

These changes are summarized in the Migration Guide.

JDBC client connection requirements

As of IBM Informix 12.10.xC4, the INFORMIXSERVER environment variable is optional for JDBC client connections.

This change is summarized in the IBM Informix JDBC Driver Programmer's Guide.

Enterprise Replication conversion and reversion requirements

Due to new functionality, if you use Enterprise Replication you must run conversion (concdr.extension) and reversion (revcdr.extension) scripts to migrate between Informix 12.10 fix packs.

This option is documented in the IBM Informix Migration Guide.

JSON compatibility conversion and reversion requirements

Support for JSON in databases was introduced in the 12.10.xC2 release, and databases created in 12.10.xC1 or earlier releases are not JSON compatible.

Before you migrate to 12.10.xC4: No special preparation is necessary. The only exception is if you are migrating from 12.10.xC2 and you have binary JSON (BSON) columns with DATE fields. In that case you must unload the data by using an external table so that you can load the data into a new database table after migration.

After you migrate to 12.10.xC4: All databases are converted to support JSON. If you migrated from 12.10.xC2 or 12.10.xC3, you might have to complete some post-migration steps depending on what JSON features you had used.

If you revert to 12.10.xC2 or 12.10.xC3: Any databases that did not support JSON before conversion do not support JSON after reversion. However, you can run a script to make such databases JSON compatible.

This information is documented in the IBM Informix Migration Guide.

Easier to import tables with large rows

You can allocate space efficiently when you import tables that have large rows by specifying a default extent size of 16 KB with the new -D option of the dbimport utility. Extent sizes that you specify in the CREATE TABLE statement override the -D option. The -D option is especially useful when you import huge tables that contain large LVARCHAR columns.

This option is documented in the IBM Informix Migration Guide.

Easier installation of 32-bit programs on Windows 64-bit operating systems

When you install Informix products from 32-bit installation media on 64-bit Windows computers, you no longer need to add the SysWOW compatibility folder to the PATH environment variable.

Uninstall Informix programs on Windows from the Control Panel

Windows users with administrator privileges can uninstall Informix products by using the Control Panel. Previous methods of modification and uninstallation are still available.

This feature is documented in the IBM Informix Installation Guide.

Multitenancy in Informix

You can now deploy an Informix server that supports multiple tenants. A tenant is a set of users in a client organization that needs to access the same data and system resources. You create a dedicated tenant database, and assign storage and processing resources for that database based on the service-level agreements with the client organization. For example, you can provide services to multiple companies that run efficiently in a single Informix instance.

You create a tenant database by running the admin() or task() SQL administration command with the new tenant create argument. You can modify some properties of tenant databases with the new tenant update argument. You can view information about tenant databases on the Tenant page in the IBM OpenAdmin Tool (OAT) for Informix or in the tenant table in the sysadmin database.
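For example, a tenant database can be created through the SQL administration API. The following is a sketch: the database, dbspace, and virtual-processor names are hypothetical, and the property list shows only a few of the tenant properties that are described in the IBM Informix Administrator's Reference:

```sql
-- Create a dedicated tenant database with its own storage and
-- CPU resources. All names and property values are illustrative.
EXECUTE FUNCTION task('tenant create', 'companyA_db',
    '{dbspace:"companyA_dbs1,companyA_dbs2",
      vpclass:"companyA_vps,num=2",
      dbspacetemp:"companyA_tmpdbs"}');
```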

This feature is documented in the IBM Informix Administrator’s Guide and the IBM Informix Administrator’s Reference.

Enhancements to OAT

You can now perform the following tasks in the IBM OpenAdmin Tool (OAT) for Informix:

  • You can remove outstanding in-place alter operations to optimize performance of DDL statements and reduce space usage. Go to Space Administration > Storage > Tables and Indexes > Server Optimization Policies.
  • After you configure automatic backups for a database server, you can propagate the backup configuration across all the servers of a grid, or reset the backup configuration for an entire grid. Set up automatic backups in the Backup Wizard on Space Administration > Backup, and adjust automatic backups on Space Administration > Backup > Configuration.
  • You can monitor the tenant databases of a multitenant Informix instance. Tenant information is on Space Administration > Tenants.

Easier removal of outstanding in-place alter operations

Removing outstanding in-place alter operations improves performance and is a prerequisite for reverting to an earlier version of Informix. You can easily remove outstanding in-place alter operations for tables or fragments in IBM OpenAdmin Tool (OAT) for Informix or with the new table update_ipa or fragment update_ipa argument of the admin() or task() SQL administration command. Previously, you ran a dummy UPDATE statement to remove outstanding in-place alter operations.

You can remove outstanding in-place alter operations faster by including the parallel option with the table update_ipa or fragment update_ipa argument of the admin() or task() SQL administration command.
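As a sketch, the parallel form of the command looks like the following (the table name is hypothetical; see the Administrator's Reference for the full argument list):

```sql
-- Remove outstanding in-place alter operations for one table,
-- running the operation in parallel.
EXECUTE FUNCTION task('table update_ipa parallel', 'customer');
```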

This feature is documented in the IBM Informix Migration Guide and the IBM Informix Administrator’s Reference.

Limit the size of extendable storage spaces

You can prevent an extendable storage space from growing indefinitely by setting a maximum size for the space. Run the admin() or task() SQL administration command with the modify space sp_sizes argument and supply a value as the max_size argument, in KB. If you omit the max_size argument, or if you set it to 0, the size of the storage space can grow indefinitely. Limiting the size of storage spaces is especially useful in a multitenancy environment because you can use storage provisioning to automatically expand the spaces that are used by a tenant, but limit the space according to the service level agreement with the tenant.
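A minimal sketch of capping an extendable dbspace follows. The space name and sizes are illustrative, and the argument order follows the modify space sp_sizes syntax in the Administrator's Reference:

```sql
-- Cap an extendable dbspace at roughly 10 GB.
EXECUTE FUNCTION task('modify space sp_sizes', 'tenant_dbs1',
                      '100000',     -- new create size (KB)
                      '1000000',    -- new extend size (KB)
                      '10000000');  -- max_size (KB); 0 = unlimited
```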

This feature is documented in the IBM Informix Administrator’s Guide and the IBM Informix Administrator’s Reference.

Limit the number of locks for a session

You can prevent users from acquiring too many locks by limiting the number of locks that each session of a user without administrative privileges can hold. Set the SESSION_LIMIT_LOCKS configuration parameter, or set the IFX_SESSION_LIMIT_LOCKS option of the SET ENVIRONMENT statement.
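For example, the session-level form can be sketched as follows (the value is illustrative; a server-wide default is set with the SESSION_LIMIT_LOCKS configuration parameter in the onconfig file):

```sql
-- Limit the current session to at most 1000 locks.
SET ENVIRONMENT IFX_SESSION_LIMIT_LOCKS '1000';
```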

This feature is documented in the IBM Informix Administrator’s Reference and the IBM Informix Guide to SQL: Syntax.

New default mode for the VP_MEMORY_CACHE_KB configuration parameter

The default mode for the VP_MEMORY_CACHE_KB configuration parameter is now STATIC. The size of private memory caches for CPU virtual processors is limited to the size that you specify in the value of the VP_MEMORY_CACHE_KB configuration parameter. Previously, the default mode was DYNAMIC, which allows the size of private memory caches to increase and decrease automatically, as needed.

This parameter is documented in the IBM Informix Administrator’s Reference.

Enhancements to the Enterprise Replication apply process and memory pool allocation

You can now specify two new methods of memory pool allocation for Enterprise Replication. Set the new CDR_MEM configuration parameter to specify that Enterprise Replication allocates memory pools for CPU virtual processors or to use a fixed-block memory pool allocation strategy.

Transaction apply performance for large-scale grid environments is faster.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Replicate hertz and compressed time series data

You can now replicate hertz and compressed time series data with Enterprise Replication.

Replicating time series data is documented in the IBM Informix Enterprise Replication Guide.

New event alarm for blocked replication transactions

The new event alarm 33003 appears if Enterprise Replication transactions are being blocked because a table is in alter mode.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Faster storage optimization

You can now compress, uncompress, and repack data or indexes faster by including the new parallel option with the table, fragment, or index argument of the admin() or task() SQL administration command.
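For example, a parallel compress-and-repack operation can be sketched as follows (table and database names are hypothetical; the exact argument list is in the Administrator's Reference):

```sql
-- Compress, repack, and shrink a table, with storage optimization
-- threads running in parallel.
EXECUTE FUNCTION task('table compress repack shrink parallel',
                      'orders', 'stores_demo');
```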

You can see new information about storage optimization operations with onstat commands. When you include the parallel option, the onstat -g ath command identifies storage optimization threads that are running in parallel. The onstat -g dsk command now shows the number of rows that remain in the operation, whether the repack operation is on the first or second pass of reading the rows, and the number of simple large objects on which operations were run.

This feature is documented in the IBM Informix Administrator’s Guide and the IBM Informix Administrator’s Reference.

Faster queries with UNION ALL views

Complex queries that business intelligence products generate can now use view folding in contexts that formerly required the views to be materialized as large temporary tables. Queries run faster if they process multiple UNION ALL views and contain no UNION operations in the main query.

Customize the display widths of Unicode private-use characters

Starting in Informix GLS 6.00.xC4, you can specify the display widths that DB-Access and other character-based Informix applications use for characters in the Unicode Private Use Area (PUA) ranges. Before you try to display the characters that are in PUA ranges, set the new IFX_PUA_DISPLAY_MAPPING environment variable, and create a mapping file: $INFORMIXDIR/gls/etc/pua.map. In the file, list each character followed by its display width. Valid display widths are 1 (halfwidth character representation) or 2 (fullwidth character representation). If you do not specify a display width for a character in the file, the default is halfwidth.

This feature is documented in the IBM Informix GLS User's Guide.

SQL compatibility: LIMIT clause allowed after the Projection clause

You can include the new LIMIT clause after the optional ORDER BY clause in a SELECT statement. Use the LIMIT clause to specify the maximum number of rows the query can return. The LIMIT clause has the same effect as the LIMIT option, except that the LIMIT option must be included in the Projection clause of the SELECT statement.
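For example, both of the following queries return at most 10 rows; the first uses the new LIMIT clause and the second uses the older LIMIT option (table and column names are illustrative):

```sql
-- New in 12.10.xC4: LIMIT clause after the ORDER BY clause
SELECT customer_num, lname
  FROM customer
  ORDER BY lname
  LIMIT 10;

-- Equivalent older form: LIMIT option in the Projection clause
SELECT LIMIT 10 customer_num, lname
  FROM customer
  ORDER BY lname;
```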

This feature is documented in the IBM Informix Guide to SQL: Syntax.

Support for CORS requests in the REST API (12.10.xC4W1)

You can now set up cross-origin resource sharing (CORS) with the REST API. To do so, set the following optional parameters that were added to the jsonListener.properties file:

  • listener.http.accessControlAllowCredentials
  • listener.http.accessControlAllowHeaders
  • listener.http.accessControlAllowMethods
  • listener.http.accessControlAllowOrigin
  • listener.http.accessControlExposeHeaders
  • listener.http.accessControlMaxAge

Use these parameters to configure the HTTP headers of all responses. The HTTP headers provide access to JSON fields that are required by asynchronous JavaScript and XML (AJAX) applications in a web browser when these applications access the REST listener.

This feature is documented in the IBM Informix JSON Compatibility Guide.

Enhanced JSON compatibility

Informix now supports the following MongoDB 2.4 features:

  • Cursor support so that you can query large volumes of data.
  • Text search of string content in collections and tables.
  • Geospatial indexes and queries.
  • Pipeline aggregation operators.
  • The array update modifiers: $each, $slice, $sort.

You can perform the following new tasks that extend MongoDB functionality in your JSON application:

  • Import and export data directly with the wire listener by using the Informix JSON commands exportCollection and importCollection.
  • Configure a strategy for calculating the size of your database by using the Informix extension to the MongoDB listDatabases command: sizeStrategy option or command.listDatabases.sizeStrategy property.

You can customize the behavior of the wire listener by setting new properties. For example, you can control logging, caching, timeout, memory pools, and the maximum size of documents.

These features are documented in the IBM Informix JSON Compatibility Guide.

Access Informix from REST API clients

You can now directly connect applications or devices that communicate through the REST API to Informix. You create connections by configuring the wire listener for the REST API. With the REST API, you can use MongoDB and SQL queries against JSON and BSON document collections, traditional relational tables, and time series data. The REST API uses MongoDB syntax and returns JSON documents.

This feature is documented in the IBM Informix JSON Compatibility Guide.

Create a time series with the REST API or the MongoDB API

If you have applications that handle time series data, you can now create and manage a time series with the REST API or the MongoDB API. Previously, you created a time series by running SQL statements. For example, you can program sensor devices that do not have client drivers to load time series data directly into the database with HTTP commands from the REST API.

You create time series objects by adding definitions to time series collections. You interact with time series data through a virtual table.

This feature is documented in the IBM Informix JSON Compatibility Guide.

Basic text searching support for JSON and BSON data

You can now create a basic text search index on columns that have JSON or BSON data types. You can create the basic text search index on JSON or BSON data types through SQL with the CREATE INDEX statement or on BSON data types through the Informix extension to MongoDB with the createTextIndex command. You can control how JSON and BSON columns are indexed by including JSON index parameters when you create the basic text search index. You can run a basic text query on JSON or BSON data with the bts_contains() search predicate in SQL queries or the $ifxtext query operator in JSON queries.
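As a sketch, a basic text search index on a BSON column and a query against it might look like the following. The table and column names are hypothetical, and the operator class name is illustrative; the exact operator classes and JSON index parameters are described in the Database Extensions guide:

```sql
-- Create a basic text search (bts) index on a BSON column.
CREATE INDEX articles_bts ON articles (data bts_bson_ops)
    USING bts;

-- Query the indexed JSON field content with the search predicate.
SELECT * FROM articles
    WHERE bts_contains(data, 'informix');
```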

This feature is documented in the IBM Informix Database Extensions User's Guide and the IBM Informix JSON Compatibility Guide.

Access BSON data from JDBC client applications

You can now easily access BSON data on the Informix server from JDBC client applications through the IfxBSONObject class.

This feature is documented in the IBM Informix JDBC Driver Programmer's Guide.

Quickly export relational tables to BSON or JSON documents

You can export relational tables to BSON or JSON documents faster by running the new genBSON() SQL function than by retrieving the data with MongoDB commands through the wire listener. For example, you can provide relational data to an application that displays data in JSON or BSON format. By default, the genBSON() function exports relational tables to a BSON document. You can cast the genBSON() function to JSON to create a JSON document.
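A minimal sketch follows. The table and column names are illustrative, and the two integer flags follow the genBSON() signature in the SQL Syntax guide; check that guide for their exact meaning:

```sql
-- Export two columns of a relational table as BSON documents,
-- cast to JSON for readable output.
SELECT genBSON(ROW(customer_num, lname), 0, 1)::JSON
  FROM customer;
```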

This feature is documented in the IBM Informix Guide to SQL: Syntax.

Include JSON documents in time series

You can include JSON documents that are associated with timestamps in time series. For example, weather monitoring sensors might return 2 - 50 values in JSON documents through the REST API every 10 minutes. You store JSON documents with time series data as BSON documents in a BSON column in the TimeSeries data type.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Enhancements to the time series Java class library

When you write a Java application with the time series Java class library, you can now define time series objects with the new builder classes. Previously, you defined time series objects with string representations of SQL statements. Builder classes reduce the possibility of errors and improve usability. The methods in the Java class library run faster than in previous releases.

The time series Java class library has the following enhancements for creating time series objects:

  • You can now determine whether the definitions of two calendars or calendar patterns are the same.
  • You can create calendar patterns and calendars with new IfmxCalendarPattern.Builder and IfmxCalendar.Builder classes.
  • You can create and manage containers with the new TimeSeriesContainer and TimeSeriesContainer.Builder classes.
  • You can create TimeSeries row types with the new TimeSeriesRowType and TimeSeriesRowType.Builder classes.
  • You can create a simpler custom type map that uses a PatternClassMap instead of individual entries for each data type with the new TimeSeriesTypeMap and TimeSeriesTypeMap.Builder classes.

The IfmxTimeSeries class has the following enhancements for managing time series data:

  • You can insert data into a time series with the new IfmxTimeSeries.Builder class.
  • You can easily modify data and process query results because the results of queries on time series data are now JDBC updatable result sets.
  • You can distinguish between case sensitive and case insensitive databases and make multiple updates within a row.
  • You can convert the time series data to the appropriate time zone on the client.
  • You can select and update data by specifying similar data types instead of the exact data types. Data is implicitly cast during read and write operations. Previously, transactions that did not specify the exact data types failed.

These improvements are documented in the IBM Informix TimeSeries Data User’s Guide.

PAM password authentication for DRDA connections

You can implement password authentication through a pluggable authentication module (PAM) for Distributed Relational Database Architecture™ (DRDA) connections.

PAM is documented in the IBM Informix Security Guide.

What's new in IBM Informix version 12.10.xC3

Server changes

Informix 12.10.xC3 includes new or changed configuration parameters, environment variables, and system catalog tables.

These changes are summarized in the IBM Informix Migration Guide.

JSON compatibility pre- and post-migration requirements

Before you migrate from Informix 12.10.xC2, if you have binary JSON (BSON) columns with DATE fields you must unload the data from the database server in JSON format by using an external table. After you migrate to 12.10.xC3, you must load the data from the external table into a new database server table in BSON format.

If your databases were created in earlier versions of Informix 12.10, you must complete some post-migration steps for JSON compatibility:

  • Run the convTovNoSQL1210.sql script on databases that were originally created in 12.10.xC1 to make them JSON compatible.
  • If you used the JSON wire listener in 12.10.xC2 with a database that has any uppercase letters in its name, update your applications to use only lowercase letters in the database name.

These steps are described in the IBM Informix Migration Guide.

New reversion requirements

After you migrate to 12.10.xC3, you can revert to the version of the database server from which you migrated if the reversion requirements are met.

These changes are summarized in the IBM Informix Migration Guide.

Automatically configure the server during installation

If you create a server during installation, the server is configured based on your selections in the installation program, storage spaces are created, automatic tuning of resources for performance is enabled, and the JSON wire listener is started.

This feature is documented in the IBM Informix Installation Guide.

Automatic resource tuning for performance

You can configure the database server to adjust resources to improve performance:

  • Increase the size of the buffer pool: Include the extendable=1 option in the BUFFERPOOL configuration parameter value to make the buffer pool extendable. Use the new memory field to specify the size of the buffer pool in units of memory, such as MB or GB, instead of units of pages. Buffer pools are now stored in the buffer pool segment of shared memory.
  • Increase the number of logical log files: Set the AUTO_LLOG configuration parameter to 1, the name of the dbspace for logical logs, and optionally the maximum size of all logical log files.
  • Increase the number of CPU and AIO virtual processors: Include the autotune=1 option in the VPCLASS configuration parameter values for the CPU and AIO virtual processor settings. Optionally include a maximum number of CPU VPs.
  • Increase the size of the physical log: Create a plogspace storage space to store the physical log by running the onspaces -c -P command. The plogspace is extendable by default.

If you create a server during installation, the buffer pool, logical log, and physical log are configured for automatic expansion. The number of expected users that you specify in the installation program sets the value of the AUTO_TUNE_SERVER_SIZE configuration parameter, which controls the sizes of the buffer pool, the dbspace for the logical log, the plogspace, and other automatically created storage spaces.

This feature is documented in the IBM Informix Administrator’s Guide, the IBM Informix Administrator’s Reference, and the IBM Informix Performance Guide.

Automatic location and fragmentation

In previous releases, the default location for new databases was the root dbspace. The default location for new tables and indexes was in the dbspace of the corresponding database. By default new tables were not fragmented. As of 12.10.xC3, you can enable the database server to automatically choose the location for new databases, tables, and indexes. The location selection is based on an algorithm that gives higher priority to non-critical dbspaces and dbspaces with an optimal page size. New tables are automatically fragmented in round-robin order in the available dbspaces.

Set the AUTOLOCATE configuration parameter or session environment option to the number of initial round-robin fragments to create for new tables. By default, all dbspaces are available. More fragments are added as needed when the table grows. You can manage the list of dbspaces for table fragments by running the admin() or task() SQL administration API command with one of the autolocate database arguments.
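For example, the session option and one of the autolocate database arguments can be sketched as follows. The database and dbspace names are hypothetical, and the full set of autolocate database arguments is in the Administrator's Reference:

```sql
-- Create new tables with 4 initial round-robin fragments
-- in the current session.
SET ENVIRONMENT AUTOLOCATE '4';

-- Restrict automatic fragment placement for one database
-- to a specific list of dbspaces.
EXECUTE FUNCTION task('autolocate database add', 'stores_demo',
                      'dbspace1,dbspace2');
```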

The list of available dbspaces is stored in the SYSAUTOLOCATE system catalog table.

The event alarm 80001 indicates that a new fragment is added to an automatically fragmented table.

This feature is documented in the IBM Informix Administrator’s Guide, the IBM Informix Administrator’s Reference, the IBM Informix Guide to SQL: Syntax, and the IBM Informix Guide to SQL: Reference.

Control the size of private memory caches

By default, the size of a private memory cache increases and decreases automatically, depending on the needs of the associated CPU virtual processor. If you want to limit the size of private memory caches to the size that you specify with the VP_MEMORY_CACHE_KB configuration parameter, include a comma and the word STATIC after the size.

This feature is documented in the IBM Informix Administrator’s Reference.

Virtual shared memory segment size doubling

The maximum number of Informix shared memory segments is now 1024. To prevent the database server from reaching the maximum number of shared memory segments, the size of virtual shared memory segments that are added dynamically by the server doubles every 16 segments. The size of the first set of segments that are added to the virtual portion of shared memory is set by the SHMADD configuration parameter.

This feature is documented in the IBM Informix Administrator’s Reference.

Retrying connections

Previously, you set the INFORMIXCONTIME and INFORMIXCONRETRY environment variables in the client environment before you started the database server. The values specified the number of seconds that the client session spends trying to connect to the database server and the number of connection attempts. As of this fix pack, you can also control the duration and frequency of connection attempts in other ways.

You can use the SET ENVIRONMENT SQL statement to set the INFORMIXCONTIME and INFORMIXCONRETRY environment options for the current session. That statement overrides the values that are set by the other methods.
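For example, the session-level form can be sketched as follows (the values are illustrative):

```sql
-- For the current session: spend up to 60 seconds trying to
-- connect, making up to 3 connection attempts.
SET ENVIRONMENT INFORMIXCONTIME '60';
SET ENVIRONMENT INFORMIXCONRETRY '3';
```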

You can update the default values of the new INFORMIXCONTIME and INFORMIXCONRETRY configuration parameters in the database server configuration (onconfig) file. You can use the onmode -wf command to update the values permanently, or the onmode -wm command if you do not want the changes to persist after the server is restarted.

This feature is documented in the IBM Informix Administrator’s Reference and the IBM Informix Guide to SQL: Syntax.

Use the Mongo API to access relational data

You can write a hybrid MongoDB application that can access both relational data and JSON collections that are stored in Informix. You can work with records in SQL tables as though they were documents in JSON collections by either referencing the tables as you would collections, or by using the $sql operator on an abstract collection.

This feature is documented in the IBM Informix JSON Compatibility Guide.

Improved JSON compatibility

Informix now supports the following MongoDB features:
  • The findAndModify command, which performs multiple operations at the same time.
  • The MongoDB authentication methods for adding users and authenticating basic roles, such as read and write permissions for database and system level users.

These features are documented in the IBM Informix JSON Compatibility Guide.

Temporarily prevent constraint validation

You can significantly increase the speed of loading or migrating large tables by temporarily preventing the database server from validating foreign-key referential constraints. You can disable the validation of constraints when you create constraints or change the mode of constraints to ENABLED or FILTERING.

  • Include the NOVALIDATE keyword in an ALTER TABLE ADD CONSTRAINT statement or in a SET CONSTRAINTS ENABLED or SET CONSTRAINTS FILTERING statement.
  • If you plan to run multiple ALTER TABLE ADD CONSTRAINT or SET CONSTRAINTS statements, run the SET ENVIRONMENT NOVALIDATE ON statement to disable the validation of foreign-key constraints during the current session.
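The two approaches above can be sketched as follows. The table, column, and constraint names are hypothetical; check the SQL Syntax guide for the exact placement of the NOVALIDATE keyword:

```sql
-- Add a foreign-key constraint without validating existing rows.
ALTER TABLE orders ADD CONSTRAINT
    (FOREIGN KEY (customer_num) REFERENCES customer (customer_num)
     CONSTRAINT fk_orders_customer NOVALIDATE);

-- Or disable validation for all such statements in this session.
SET ENVIRONMENT NOVALIDATE ON;
```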

When you migrate data, include the -nv option in the dbimport command.

The NOVALIDATE keyword prevents the database server from checking every row for referential integrity during ALTER TABLE ADD CONSTRAINT and SET CONSTRAINTS operations on foreign-key constraints. When those statements finish running, the database server automatically resumes referential-integrity enforcement of those constraints in subsequent DML operations.

Use this feature only on tables whose enabled foreign-key constraints are free of violations, or when the referential constraints can be validated after the tables are loaded or migrated to the target database.

This feature is documented in the IBM Informix Guide to SQL: Syntax, the IBM Informix Performance Guide, and the IBM Informix Migration Guide.

Faster creation of foreign-key constraints

When you run the ALTER TABLE ADD CONSTRAINT statement, some foreign-key constraints can be created faster if the table has a unique index or a primary-key constraint that is already defined on the columns in the foreign-key constraint.

Foreign-key constraints are not created faster, however, if the constraint key or index key includes columns of user-defined or opaque data types, including BOOLEAN and LVARCHAR, or if other restrictions are true for the foreign-key constraint or for the referenced table.

This feature is documented in the IBM Informix Guide to SQL: Syntax and the IBM Informix Performance Guide.

Find the quarter of the calendar year for dates

You can find the quarter of the calendar year for a date by running the QUARTER function. The QUARTER function accepts a DATE or DATETIME argument, and returns an integer in the range 1 - 4, indicating the quarter of a calendar year. For example, on any date in July, August, or September, the expression QUARTER (CURRENT) returns 3 because those dates are in the third quarter of a calendar year. You can include the QUARTER function in queries and in statements to define a distributed storage strategy that is based on a DATE or DATETIME column, such as the PARTITION BY EXPRESSION clause of the CREATE TABLE or ALTER TABLE statements.
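For example, the following query (table and column names are illustrative) uses QUARTER both in the projection list and as a filter:

```sql
-- QUARTER returns an integer in the range 1 - 4 for a
-- DATE or DATETIME argument.
SELECT order_num, order_date, QUARTER(order_date) AS qtr
  FROM orders
  WHERE QUARTER(order_date) = 3;
```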

This feature is documented in the IBM Informix Guide to SQL: Syntax.

Improvements to Connection Manager

If you use Connection Manager to manage client connections, you can use the following new POLICY values in a service-level agreement:

  • Use the ROUNDROBIN policy to direct client connection requests in a repeating, ordered fashion (round-robin) to a group of servers.
  • Use the SECAPPLYBACKLOG policy to redirect connections away from secondary, high-availability cluster servers that have apply backlogs over a specific threshold.

If failover processing in a high-availability cluster cannot complete, the Connection Manager now automatically retries failover processing at 1-minute intervals. The RETRY attribute of the Connection Manager FOC parameter is ignored because that attribute is not supported as of this fix pack.

These features are documented in the IBM Informix Administrator’s Guide.

View log-staging information on RS secondary servers

If you configure a remote stand-alone (RS) secondary server to delay or stop the application of log records, you can view log-staging information by running the onstat -g rss verbose command on the RS secondary server. Output for the onstat -g rss verbose command now includes buffer and page information for data that is being staged.

This feature is documented in the IBM Informix Administrator’s Reference.

Easier configuration and cloning of a server for replication

If you create a server during installation, you can easily create an Enterprise Replication domain or a high-availability cluster. Previously, you had to configure connectivity manually on each server.

Run the cdr autoconfig serv command to configure connectivity and start Enterprise Replication.

Run the ifxclone command with the -autoconf option to clone a server, configure connectivity, and start replication. You can now create HDR and shared-disk secondary servers with the ifxclone utility.

These features are documented in the IBM Informix Administrator’s Guide, the IBM Informix Administrator’s Reference, and the IBM Informix Enterprise Replication Guide.

Shard data across Enterprise Replication servers

Using Enterprise Replication, Informix can now horizontally partition (shard) a table or collection across multiple database servers. When you create a sharding definition through the cdr utility, rows from a table or documents from a collection can be distributed across the nodes of an Enterprise Replication system, reducing the number of rows or documents and the size of the index on each node. When you distribute data across database servers, you also distribute performance across hardware. As your database grows in size, you can scale up by adding more database servers.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Enhancements for handling spatial data

You can transform spatial data between spatial reference systems that are in different geographic coordinate systems. Previously, you transformed data only within the same geographic coordinate system.

You can choose from many more predefined spatial reference systems instead of defining most of the systems that you need.

You can calculate the distance and area for data that is based on the round-Earth model. If your geometries have a spatial reference system that is based on angular units, you can calculate distance and area in meaningful linear units. Specify the appropriate unit of measure to convert angular units to linear units in the ST_Area, ST_Buffer, ST_Distance, ST_Length, and ST_Perimeter functions. You can specify predefined units of measure or define your own units of measure. The SE_Nearest function calculates distance between geometries that are in geographic coordinate systems by applying the linear unit of measure of meters.

Informix spatial data types now conform to the OpenGIS Simple Features Specification for SQL Revision 1.1 and the ISO/IEC 13249-3 SQL/MM Part 3: Spatial. The Informix spatial solution is based on the ESRI SDE 10.2 Shape and PE libraries.

These features are documented in the IBM Informix Spatial Data User's Guide.

Efficient storage for hertz and numeric time series data

You can save disk space by packing multiple time series records in each element. If your data is recorded with a regular subsecond frequency, you can define a hertz time series to pack records for a second of data in each time series element. If all the columns in your TimeSeries data type are numeric, you can define a compressed time series to pack and compress up to 4 KB of records in each time series element.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Control the destroy behavior for rolling window containers

You can limit the number of partitions of a rolling window container that can be destroyed in an operation. You control how many partitions are destroyed and whether active partitions can be destroyed when the number of partitions that must be detached is greater than the size of the dormant window. When you create a rolling window container, set the destroy_count parameter to a positive integer and the window_control parameter to 2 or 3 in the TSCreateContainer function. You can change the destroy behavior of an existing rolling window container by including the wcontrol parameter in the TSContainerManage function.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Monitor groups of containers with wildcard characters

You can monitor groups of containers that have similar names. Include the wildcard characters for the MATCHES operator in the parameter for the container name in the TSContainerUsage, TSContainerTotalPages, TSContainerTotalUsed, TSContainerPctUsed, TSContainer, and TSContainerNElems functions.
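
For example, the * wildcard of the MATCHES operator selects every container whose name starts with a given prefix. This is a sketch; the container names are illustrative, and the functions may accept additional arguments:

```sql
-- Usage statistics for all containers whose names begin with sensor_
EXECUTE FUNCTION TSContainerUsage('sensor_*');

-- Page totals for the same group of containers
EXECUTE FUNCTION TSContainerTotalPages('sensor_*');
EXECUTE FUNCTION TSContainerTotalUsed('sensor_*');
```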

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Faster queries by running time series routines in parallel

Time series SQL routines that you include in the WHERE clause of SELECT statements return results faster when they run in parallel. If you fragment the table that contains the time series data and enable PDQ, time series SQL routines run in parallel.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Faster queries with IN conditions through virtual tables

Access methods that are created through the virtual table interface now process IN conditions in query predicates that operate on simple columns. Processing through a virtual table interface is generally faster than SQL processing. For example, queries with IN conditions that you run on time series virtual tables now run faster than in previous releases.

This feature is documented in the IBM Informix Virtual-Table Interface Programmer's Guide.

Accelerate warehouse queries in-memory using data from multiple sources

You can use synonyms and views to combine data from multiple sources, including JSON and time series data. The synonyms and views can reference a local table, a table in a different database of the same Informix instance, or a table in a different Informix instance. Using synonyms and views might require changes to the uniquecheck parameter of the use_dwa environment variable.

This feature is documented in the IBM Informix Warehouse Accelerator Administration Guide.

What's new in IBM Informix version 12.10.xC2

Installation enhancements

When you install IBM Informix, you now select the number of expected users of the database server. This number is used to optimize the Informix instance that can be created as part of the installation process.

If you choose to create a database instance that is initialized and ready to use after installation, it is configured with automatic storage provisioning and other auto-tuning features that reduce the need for administration and maintenance. Also, that instance is enabled for JSON compatibility.

If you install IBM Informix JSON functionality as part of an IBM Informix server installation on a Windows operating system, IBM Informix automatically creates the ifxjson user, which is required for communicating with the wire listener. You must create a password for the ifxjson account.

These features are documented in the IBM Informix Installation Guide.

Server changes

12.10.xC2 includes new and changed configuration parameters and environment variables. This fix pack also supports these new SQL keywords: BSON, JSON, LATERAL.

These changes are summarized in the IBM Informix Migration Guide.

Changes to online notes

Documentation notes and defect files are no longer included in the product. See the Documentation notes for 12.10.xC2, Fixed defects for 12.10.xC2, and Known defects for 12.10.xC2.

The release notes are provided only in HTML format. Text versions of release notes are no longer included in the product.

The Client Software Development Kit release notes are consolidated into two release notes files: Windows platforms and UNIX, Linux, and Mac OS X platforms. The Informix GLS and Informix JDBC Driver release notes remain separate.

If you upgrade from Informix version 12.10.xC1, the 12.10.xC1 files that are no longer included for 12.10.xC2 remain in your installation.

Converting 12.10.xC1 databases for JSON compatibility

Before you can use JSON features with databases that were created in 12.10.xC1, you must run the convTovNoSQL1210X2.sql script after you install 12.10.xC2. However, if the databases do not require JSON compatibility, you do not have to run this script.

These changes are summarized in the IBM Informix Migration Guide.

Reversion requirements

After you migrate to 12.10.xC2, you can revert to the version of the database server from which you migrated as long as the reversion requirements are met.

Those requirements are summarized in the IBM Informix Migration Guide.

New event alarm for network failures

The event alarm 84001 appears if the database server cannot bind to the port that is listed in the sqlhosts file. Event alarm 84001 has a severity of 3 and is accompanied by an assertion warning in the online log file. The user action to solve the problem is to correct any errors in the host name or IP address, the service name, or the port number entries in the sqlhosts file.

This feature is documented in the IBM Informix Administrator’s Reference.

Customized help is not available

In previous versions of IBM OpenAdmin Tool (OAT) for Informix, you could add or modify the context help topics. You can no longer customize help topics in OAT.

Find what you need by searching OAT

Instead of navigating the IBM OpenAdmin Tool (OAT) for Informix menu, you can search by function. In the Search text box, enter a keyword or phrase. For example, enter onstat to go directly to the onstat utility page where you can display performance and status information for a database server. Search terms are not case-sensitive. Boolean operators, quotation marks, and wildcard characters are not supported.

Optimize backups for data deduplication

If your storage manager is enabled for data deduplication, use the IFX_BAR_USE_DEDUP environment variable to make backup operations more efficient. The new environment variable optimizes the format of backup images for deduplication processes.

This feature is part of storage optimization and is documented in the IBM Informix Backup and Restore Guide.

Enhanced support for IBM Tivoli Storage Manager features

If you use IBM Tivoli® Storage Manager (TSM) with the ON-Bar utility to back up and restore data, you can configure ON-Bar to support the following TSM features:
  • You can speed up backup and restore processes by using a longer transfer buffer size. Set the IFX_BAR_NO_LONG_BUFFERS environment variable to prevent long transfer buffers.
  • You can replicate or import and export backup objects between TSM servers. Set the IFX_TSM_OBJINFO_OFF environment variable to prevent this feature.

These features are documented in the IBM Informix Backup and Restore Guide.

Restore critical files

You can restore the following critical files that you backed up with the ON-Bar utility:
  • The onconfig file
  • UNIX: The sqlhosts file
  • The ON-Bar emergency boot file: ixbar.servernum
  • The server boot file: oncfg_servername.servernum
Run the onbar -r command with the -cf yes option to restore critical files during a cold restore. Run the onbar -r -cf only command to restore only the critical files. Previously, you restored critical files by using storage manager commands.

This feature is documented in the IBM Informix Backup and Restore Guide.

ON-Bar activity log timestamps

When a storage manager process hangs, the timestamp for the process in the ON-Bar activity log can be inaccurate: it shows the time at which the storage manager process started hanging instead of the current time. Inaccurate timestamps in the ON-Bar activity log are now identified with an asterisk. The accompanying message lists the number of minutes after the timestamp that the storage manager process hung.

This feature is documented in the IBM Informix Backup and Restore Guide.

Compare onconfig files (UNIX, Linux)

You can compare two onconfig files and show the differences between them by running the onconfig_diff utility. For example, after you upgrade to a new version of Informix, you can compare the new onconfig file to the onconfig file from the earlier version of Informix.

This feature is documented in the IBM Informix Administrator’s Reference.

In-place alter operations on serial data types

The ALTER TABLE statement converts the following column data types with in-place alter operations:
  • SERIAL to SERIAL8
  • SERIAL to BIGSERIAL
  • SERIAL8 to BIGSERIAL
  • BIGSERIAL to SERIAL8

Previously such data types were converted with slow alter operations. In-place alter operations require less space than slow alter operations and make the table available to other sessions faster.
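
For example, widening a serial column is now an in-place operation. The table and column here are illustrative (they follow the stores_demo demonstration database):

```sql
-- SERIAL to BIGSERIAL is now an in-place alter:
ALTER TABLE orders MODIFY (order_num BIGSERIAL);
```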

This feature is documented in the IBM Informix Performance Guide.

Faster queries for the Committed Read Last Committed isolation level

Queries that run with the Committed Read Last Committed isolation level can now use read ahead and light scans to run faster than in previous releases.

Dynamic private memory caches for CPU virtual processors

Private memory caches for CPU virtual processors now change size automatically as needed. You create private memory caches by setting the VP_MEMORY_CACHE_KB configuration parameter to the initial size of the caches. The size of a private memory cache increases and decreases automatically, depending on the needs of the associated CPU virtual processor. Previously, the size of private memory caches was limited to the value of the VP_MEMORY_CACHE_KB configuration parameter.

The onstat -g vpcache command now displays the target size for each bin in the cache before draining starts and the last time that each bin was drained.

This feature is documented in the IBM Informix Performance Guide and the IBM Informix Administrator’s Reference.

Monitor resource contention

You can view the dependencies between blocking and waiting threads by running the onstat -g bth command. Run the onstat -g BTH command to display session and stack information for the blocking threads.

This feature is documented in the IBM Informix Performance Guide and the IBM Informix Administrator’s Reference.

JSON compatibility

The Informix database server provides features that reduce the time and complexity of enabling some applications to work with Informix. Applications using the JSON-oriented query language that was created by MongoDB can interact with data stored in Informix databases. You can use MongoDB community drivers to insert, update, and query JSON documents in Informix. The Informix database server also provides built-in JSON and BSON data types.

Some of these features, including the following ones, are enabled by default when you select the Typical installation path:
  • A JSON wire listener, which provides client communication between MongoDB community drivers and the database server.
  • The IBM Informix JSON Plug-in for OpenAdmin Tool (OAT), for monitoring your JSON documents.
  • A preconfigured database instance that is initialized and ready to use after installation.
The following features provide dynamic scaling and high-availability for data-intensive applications:
  • Define a shard cluster to easily add or remove servers as your requirements change.
  • Use shard keys to distribute subsets of data across multiple servers in a shard cluster.
  • Query the correct servers in a shard cluster and return the consolidated results to the client application.
  • Use secondary servers (similar to slaves in MongoDB) in the shard cluster to maximize availability and throughput. Secondary servers also have update capability.

These features are documented in the IBM Informix JSON Compatibility Guide.

JSON compatibility in OAT

You can monitor JSON collections in your Informix database with the IBM Informix JSON Plug-in for OpenAdmin Tool (OAT). With the JSON plug-in, you can view information such as a list of all collections, indexes on a collection, and privileges. The plug-in is installed by default, but you can choose to exclude it during installation.

You can also enable sharding for JSON data with OAT. You can add or delete hosts for the trusted host relationships for the database server. Database servers in a shard cluster must have trusted-host relationships. The values are updated in the file that is referenced by the database server REMOTE_SERVER_CFG configuration parameter. Go to the Server Administration > Configuration page, and click the Trusted Hosts tab.

For more information, see the JSON plug-in help and the OAT help.

Joins with lateral references

In queries that join result tables in the FROM clause, you can now use the LATERAL keyword to reference previous table and column aliases in the FROM clause. The LATERAL keyword must immediately precede any query in the FROM clause that defines a derived table as its result set, if that query references any table or column that appears earlier in the left-to-right order of FROM clause syntax elements. For SELECT statements that join derived tables, lateral table and column references comply with the ISO/ANSI standard for SQL syntax, and can improve performance. Lateral references are also valid in DELETE, UPDATE, and CREATE VIEW statements that include derived tables.
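
For example, in this query against the stores_demo demonstration tables, the derived table references the alias c, which appears earlier in the FROM clause, so it must be preceded by the LATERAL keyword:

```sql
SELECT c.customer_num, c.lname, t.order_total
  FROM customer c,
       LATERAL (SELECT SUM(o.ship_charge) AS order_total
                  FROM orders o
                 WHERE o.customer_num = c.customer_num) AS t;
```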

This functionality is documented in the IBM Informix Guide to SQL: Syntax.

Enhanced basic text searching

You have several new options to customize basic text searching when you create a bts index. You can increase the maximum number of tokens to index in a document with the field_max_token index parameter, instead of being limited to the previous maximum of 10 000 tokens. You can build the bts index faster in RAM than in a temporary sbspace by including the xact_ramdirectory="yes" index parameter. You can limit the amount of memory that is available for basic text search operations with the xact_memory index parameter. You can index words that contain numbers and other characters by specifying the Alnum analyzer.
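
A sketch that combines the new index parameters in one CREATE INDEX statement. The table, column, operator-class, and sbspace names, and the value formats for xact_memory and the analyzer, are illustrative assumptions; verify them against the Database Extensions User's Guide:

```sql
CREATE INDEX articles_bts ON articles (body bts_lvarchar_ops)
  USING bts (field_max_token = "50000",   -- raise the 10,000-token limit
             xact_ramdirectory = "yes",   -- build the index in RAM
             xact_memory = "50MB",        -- cap memory (value format assumed)
             analyzer = "alnum")          -- index words that contain numbers
  IN sbspace1;
```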

This feature is documented in the IBM Informix Database Extensions User's Guide.

IBM Informix .NET Provider support for .NET Framework versions

The IBM Informix .NET Provider is an extension of the ADO.NET interface that allows .NET applications to access Informix database servers.

The provider now supports .NET Framework 2.0 through 4.5:
  • Use the Informix .NET Framework 2.0 Provider for .NET Framework 2.0, 3.0, and 3.5.
  • Use the Informix .NET Framework 4.0 Provider for .NET Framework 4.0 and 4.5.

The Informix .NET Provider support for .NET Framework 1.1 is deprecated. For application compatibility across .NET Framework versions, refer to the information provided by Microsoft.

This functionality is documented in the IBM Informix .NET Provider Reference Guide.

Defining separators for fractional seconds in date-time values

Now you can control which separator to use in the character-string representation of fractional seconds. To define a separator between seconds and fractional seconds, you must include a literal character between the %S and %F directives when you set the GL_DATETIME or DBTIME environment variable, or when you call the TO_CHAR function. By default, a separator is not used between seconds and fractional seconds. Previously, the ASCII 46 character, a period ( . ), was inserted before the fractional seconds, regardless of whether the formatting string included an explicit separator for the two fields.
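
For example, with the TO_CHAR function, a literal character between the %S and %F directives becomes the separator; with no literal, seconds and fractional seconds run together. The events table and its DATETIME YEAR TO FRACTION(3) column are hypothetical:

```sql
-- Renders 15:30:45.500 as "15:30:45.500" (explicit '.' separator):
SELECT TO_CHAR(event_time, '%Y-%m-%d %H:%M:%S.%F3') FROM events;

-- No literal between %S and %F3: renders as "15:30:45500":
SELECT TO_CHAR(event_time, '%Y-%m-%d %H:%M:%S%F3') FROM events;
```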

This functionality is documented in the IBM Informix GLS User's Guide, the IBM Informix Guide to SQL: Syntax, and the IBM Informix Guide to SQL: Reference.

Simplified schema changes for replicated tables

If you make many changes to the schema of replicated tables that belong to a replicate set, you can easily update the replicate definitions to reflect the schema changes. After you alter replicated tables, run the cdr define replicateset command with the --needRemaster option to derive a replicate set that consists of only the replicates that are affected by the alter operations. You remaster the derived replicate set by running the cdr remaster replicateset command. You do not need to update or remaster every replicate individually.

If you want to only drop multiple columns from multiple replicated tables, you can run the cdr remaster command with the --remove option.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Set up and query time series data through a grid

If you plan to replicate time series data, you can set up time series through a grid. You can run the commands to set up time series on one grid server and propagate the commands to the other grid servers.

You can query time series data in the context of a grid. However, you can run a grid query only on a virtual table that is based on a table that has a TimeSeries column.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Control the replication of large objects

By default, when any column in a replicate row is changed, Enterprise Replication replicates the entire row. However, to improve performance, columns that contain a large object are replicated only when the content of the large object changes. You can force the replication of large objects by including the --alwaysRepLOBs=y option with the cdr define replicate, cdr modify replicate, or cdr define template command. Always including large object columns in replicated rows can be useful if you have a workflow replication system.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Custom checksum function for consistency checking

When you check the consistency of replicated rows, a checksum is generated for each row on each server and then the corresponding checksums are compared. You can write your own checksum function instead of using the checksum function that is supplied with the database server.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Shard tables across database servers

You can now shard, or horizontally partition, a table across multiple database servers. Rows from a table can be distributed across a cluster of database servers, which reduces the number of rows and the size of the index for the database of each server. When you distribute data across database servers, you also distribute performance across hardware, which can result in significant performance improvements. As your database grows in size, you can scale up by adding more database servers.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Replicate time series data with all high-availability clusters

You can now replicate time series data with all types of high-availability clusters. Previously, you could replicate time series data only with High-Availability Data Replication (HDR) clusters, and not with shared-disk secondary and remote stand-alone secondary clusters. Secondary servers must be read-only.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Order TimeSeries columns in query results

You can include a TimeSeries column in an ORDER BY clause of an SQL query. The ORDER BY clause sorts the results from the TimeSeries column by the time series instance ID.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Improvements for time series loader programs

You have new options for how you flush time series data to disk when you write a loader program. You can flush time series elements for all containers to disk in a single transaction or in multiple transactions. If you want your client application to control transactions, run the TSL_FlushAll function. The TSL_FlushAll function flushes time series elements to disk in one transaction. If you want the loader program to control the size of your transactions, run the TSL_Commit function. The TSL_Commit function flushes time series elements to disk in multiple transactions, based on the commit interval that you specify.

You can view the results of the data flushing function by running the TSL_FlushInfo function.

You can specify that no duplicate elements are allowed when you flush time series data to disk.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Faster aggregation of an interval of time series data

You can aggregate an interval of time series data faster by including start and end dates in the TSRollup function. Previously, you selected an interval of time series data with the Clip or similar function and passed the results to the TSRollup function.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Faster queries on time series virtual tables

You can run queries in parallel on a virtual table that is fragmented. The virtual table must be based on a time series table that is fragmented by expression. Include the fragment flag in the TSVTMode parameter when you create the virtual table.

You can include the flags for the TSVTMode parameter as a set of strings instead of as a number.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Accelerate queries on time series data

You can accelerate queries on time series data by creating data marts that are based on time series virtual tables.

You can define virtual partitions so that you can quickly refresh the data in part of the data mart or continuously refresh the data. You can make queries faster by limiting the amount of data in the data mart to specific time intervals.

This feature is documented in the IBM Informix Warehouse Accelerator Administration Guide and the IBM Informix TimeSeries Data User’s Guide.

Load data from external tables into data marts

You can load data directly from external tables into an Informix Warehouse Accelerator data mart. You no longer need to load data from external tables into an Informix database before you transfer the data to Informix Warehouse Accelerator.

This feature is documented in the IBM Informix Warehouse Accelerator Administration Guide.

Transport Layer Security (TLS) versions 1.0, 1.1 and 1.2 enabled by default

Informix database server-client connections now support Transport Layer Security (TLS) versions 1.0, 1.1, and 1.2 by default. Previously, only version 1.0 was enabled by default. TLS is the successor to Secure Sockets Layer (SSL) and provides cryptographic protocols for client/server connections. You can use the new TLS_VERSION configuration parameter to change the TLS connection versions to accommodate the security needs and client connections of your enterprise.

This feature is documented in the IBM Informix Administrator’s Reference and the IBM Informix Security Guide.

What's new in IBM Informix version 12.10.xC1

OpenAdmin Tool (OAT) for Informix is installed by default with the Client SDK

The IBM OpenAdmin Tool (OAT) for Informix is installed by default during a typical installation of the following products:
  • IBM Informix Client Software Development Kit (Client SDK), Version 4.10
  • IBM Informix Connect, Version 4.10

Use the custom installation option if you do not want to install OAT. Previously, OAT was not installed by default.

This feature is documented in the IBM Informix Client Products Installation Guide.

Upgrading to Version 12.10

If you are migrating from Informix Version 11.70, 11.50, 11.10, or 10.0, you can migrate directly to Informix Version 12.10.

This process is documented in the IBM Informix Migration Guide.

Autonomic storage management for rolling window tables

For tables that use a RANGE INTERVAL distributed storage strategy, the database server creates new fragments automatically when an inserted record has a fragment key value outside the range of any existing fragment. You can use new DDL syntax to define a purge policy for archiving or deleting interval fragments after the table exceeds a user-specified limit on its allocated storage space size, or on the number of its interval fragments. A new built-in Scheduler task runs periodically to enforce the purge policies on qualifying rolling window tables.
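
A sketch of the new DDL for a rolling window table. The names are illustrative, and the exact purge-policy keywords are an assumption to be verified against the Guide to SQL: Syntax:

```sql
-- Monthly interval fragments; when more than 12 interval fragments
-- exist, the Scheduler task removes the oldest ones per the policy.
CREATE TABLE meter_readings (
    meter_id  INTEGER,
    read_date DATE,
    kwh       DECIMAL(10,2))
  FRAGMENT BY RANGE (read_date)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    ROLLING (12 FRAGMENTS) DISCARD
    STORE IN (dbs1, dbs2)
    PARTITION p0 VALUES < DATE('01/01/2014') IN dbs1;
```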

This feature is documented in the IBM Informix Guide to SQL: Syntax.

Manage rolling window tables in OAT

You can create a rolling window table in OAT. On the SQL ToolBox > Schema Manager page, select the date-range fragmentation or range fragmentation storage scheme and enable the rolling window options. When a table exceeds a specified size, you can configure which fragments are eligible for removal. Choose whether only interval fragments, or either interval or range fragments are removed. You can also choose to have range fragments removed only after all the interval fragments are removed.

For more information, view the online help.

Enhanced CREATE TABLE and ALTER FRAGMENT interval fragment syntax

For tables that are fragmented by RANGE INTERVAL, you can create an SPL routine to return a dbspace name. If this UDF is a function expression after the STORE IN keywords of the interval fragment clause, the database server calls the UDF when it creates new interval fragments for the table. As an alternative to a list of dbspaces, this syntax is valid for interval fragments of rolling window tables, or for interval fragments of tables with no purge policy.

This feature is documented in the IBM Informix Guide to SQL: Syntax.

OLAP windowed aggregate functions

This release introduces support for online analytical processing (OLAP) functions to provide ordinal ranking, row numbering, and aggregate information for subsets of the qualifying rows that a query returns. These subsets are based on partitioning the data by one or more column values. You can use OLAP specifications to define moving windows within a partition for examining dimensions of the data, and identifying patterns, trends, and exceptions within data sets. You can define window frames as ranges of column values, or by position relative to the current row in the window partition.

If you invoke the built-in aggregate function COUNT, SUM, AVG, MIN, MAX, STDEV, VARIANCE, or RANGE from an OLAP window, the return value is based on only the records within that window, rather than on the entire result set of the query. In nested queries, the set of rows to which each OLAP function or aggregate is applied is the result set of the query block that includes the OLAP function.
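
For example, in this query against the stores_demo orders table, each aggregate is computed over its window partition rather than over the full result set:

```sql
SELECT customer_num, order_date,
       SUM(ship_charge) OVER (PARTITION BY customer_num
                              ORDER BY order_date
                              ROWS UNBOUNDED PRECEDING) AS running_total,
       RANK() OVER (PARTITION BY customer_num
                    ORDER BY ship_charge DESC) AS charge_rank
  FROM orders;
```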

This feature is documented in the IBM Informix Guide to SQL: Syntax.

Enhanced control over how the ORDER BY query results sort null values

When you are sorting ORDER BY query results, you can use the new keyword options of NULLS FIRST or NULLS LAST to specify the result set order of rows that have a null sort key value.

Earlier releases treated null values differently:
  • Put null values last if the ORDER BY clause specified DESC for a descending order
  • Put null values first if DESC was omitted, or if the ORDER BY clause explicitly specified ASC for an ascending order
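
For example, against the stores_demo orders table, where paid_date is null for unpaid orders:

```sql
-- Ascending by paid_date, but unpaid (null) orders sort last,
-- overriding the previous default of nulls-first for ASC:
SELECT order_num, paid_date
  FROM orders
 ORDER BY paid_date ASC NULLS LAST;
```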

This feature is documented in the IBM Informix Guide to SQL: Syntax.

SQL DISTINCT expressions as arguments to COUNT, SUM, AVG, and STDDEV aggregates

In earlier releases, queries can call the built-in COUNT, SUM, AVG, and STDDEV functions in a column or column expression. This release extends the domain of COUNT, SUM, AVG, STDDEV arguments to SQL DISTINCT expressions, including CASE expressions.

This feature is documented in the IBM Informix Guide to SQL: Syntax.

Multiple DISTINCT aggregate functions in a query

You can now include multiple aggregate functions that return DISTINCT values in a query. For example, you can include multiple COUNT(DISTINCT) specifications in a single query instead of writing a separate query for each COUNT aggregate.
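
For example, a single query against the stores_demo orders table can now combine multiple DISTINCT aggregates, including a DISTINCT CASE expression:

```sql
SELECT COUNT(DISTINCT customer_num) AS customers,
       COUNT(DISTINCT ship_instruct) AS instructions,
       COUNT(DISTINCT CASE WHEN ship_charge > 20
                           THEN customer_num END) AS big_spenders
  FROM orders;
```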

This feature is documented in the IBM Informix Guide to SQL: Syntax.

Faster ANSI join queries

ANSI outer join queries that have equality joins can run faster because the optimizer now chooses either a hash join or a nested-loop join on a cost basis. In earlier releases, the optimizer used only nested-loop joins in ANSI outer joins.

Temporary table projection optimization for views and derived tables

Applications and analytic tools can define a query in which a derived table contains multiple views joined with base tables, potentially including hundreds of columns. The database server materializes this query in a system-generated temporary table. The parent query, however, might project only a few columns.

The database server creates internally generated temporary tables that include only the columns that are specified in the Projection list, the WHERE clause, the ORDER BY clause, and in other clauses of the immediate parent query. By excluding unnecessary columns from the temporary table, the database server uses storage resources efficiently and avoids I/O operations on the columns that do not contribute to the query result.

This feature is documented in the IBM Informix Administrator’s Guide.

Enhanced SQL statements to store query results in permanent tables

You can store query results in permanent tables when you create tables in your database and when you retrieve data from tables.

Use the CREATE TABLE AS ...SELECT FROM statement to define a permanent database table that multiple sessions can access. In previous releases, you had to use the CREATE TABLE statement to create empty tables, and then use the INSERT INTO...SELECT FROM... statements to store data in the tables.

Use the SELECT...INTO statement to store query results in a permanent table. In previous releases, SELECT statements stored query results only in temporary or external table objects.
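
For example, against the stores_demo tables (the target table names are illustrative; the placement of the INTO clause follows the existing INTO TEMP pattern and should be verified against the Guide to SQL: Syntax):

```sql
-- One statement instead of CREATE TABLE plus INSERT INTO ... SELECT:
CREATE TABLE ca_customers AS
  SELECT customer_num, fname, lname
    FROM customer
   WHERE state = 'CA';

-- Store a query result directly in a permanent table:
SELECT order_num, ship_charge
  FROM orders
 WHERE ship_charge > 20
  INTO big_orders;
```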

This feature is documented in the IBM Informix Guide to SQL: Syntax.

The IBM Informix SPL language now includes the CASE statement

SPL routines now can use the CASE statement as a faster alternative to IF statements to define a set of conditional logical branches that are based on the value of an expression. This syntax can simplify the migration of SPL applications that were written for Extended Parallel Server or for other database servers.
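
A minimal SPL sketch; the procedure is hypothetical, and the exact CASE statement syntax should be verified against the Guide to SQL: Syntax:

```sql
CREATE PROCEDURE charge_band(charge MONEY(8,2))
  RETURNING VARCHAR(10);

  -- CASE replaces a chain of IF ... ELIF ... ELSE statements:
  CASE
    WHEN charge > 100 THEN RETURN 'high';
    WHEN charge > 20  THEN RETURN 'medium';
    ELSE RETURN 'low';
  END CASE;

END PROCEDURE;
```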

This feature is documented in the IBM Informix Guide to SQL: Syntax.

SPL routines for application compatibility

The SQL packages extension provides SPL (Stored Procedure Language) routines that you can use in an application that is compatible with other database servers. For example, the packages include routines for large object handling, alert and message management, and random number generation.

This feature is documented in the IBM Informix Database Extensions User's Guide.

Enhanced support for OUT and INOUT parameters in SPL routines

SPL user-defined routines and C user-defined routines with OUT or INOUT arguments can be invoked from other SPL routines. The OUT and INOUT return values can be processed as statement-local variables or as local SPL variables of SQL data types. The SPL routines that are invoked from SPL routines support all data types except BYTE, TEXT, BIGSERIAL, SERIAL, and SERIAL8. The C routines that are invoked from SPL routines support all data types except BYTE, TEXT, BIGSERIAL, SERIAL, SERIAL8, and ROW.

This feature is documented in the IBM Informix Guide to SQL: Syntax.

CASE expressions in the ORDER BY clause of SELECT statements

The ORDER BY clause now can include a CASE expression immediately following the ORDER BY keywords. This expression allows sorting key specifications for query result sets to be based on evaluating multiple logical conditions.

This feature is documented in the IBM Informix Guide to SQL: Syntax.

INTERSECT and MINUS set operators for combined query results

You can use the INTERSECT and MINUS set operators to combine two queries. Like UNION or UNION ALL set operators, these set operators combine the result sets of two queries that are their left and right operands.
  • INTERSECT returns only the distinct rows that are in both the left and the right query results.
  • MINUS returns only the distinct rows from the left query result that are not in the right query result.
You can use EXCEPT as a keyword synonym for the MINUS set operator.
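
A sketch against the stores_demo demonstration tables:

```sql
-- INTERSECT: customer numbers that appear in both result sets,
-- that is, customers with at least one order
SELECT customer_num FROM customer
INTERSECT
SELECT customer_num FROM orders;

-- MINUS: customer numbers in the left result set only,
-- that is, customers with no orders
SELECT customer_num FROM customer
MINUS
SELECT customer_num FROM orders;
```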

This feature is documented in the IBM Informix Guide to SQL: Syntax.

Simplified CREATE EXTERNAL TABLE syntax for loading and unloading data

Data files from external sources with special delimiter characters can be loaded and unloaded more easily than in previous releases. Use the CREATE EXTERNAL TABLE statement, with or without the ESCAPE ON keywords, and specify the DELIMITER keyword. The database server inserts the escape character immediately before any delimiter character that occurs in the data. You can use the ESCAPE OFF keywords to improve performance if the data does not contain special delimiter characters.
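
A minimal sketch; the external table name and data file path are hypothetical:

```sql
-- With ESCAPE ON, the server escapes any pipe characters
-- that occur inside the data values during an unload
CREATE EXTERNAL TABLE ext_customer
   SAMEAS customer
   USING (
      DATAFILES ("DISK:/tmp/customer.unl"),
      DELIMITER "|",
      ESCAPE ON
   );
```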

This feature is documented in the IBM Informix Guide to SQL: Syntax.

Improve space utilization by compressing, repacking, and shrinking B-tree indexes

You can use SQL administration API commands or CREATE INDEX statements to save disk space by compressing B-tree indexes. You can also use SQL administration API commands to consolidate free space in a B-tree index, return this free space to the dbspace, and estimate the amount of space that is saved by compressing the indexes.
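
For example, with the SQL administration API (the index, database, and owner names here are hypothetical):

```sql
-- Estimate how much space compressing the index would save
EXECUTE FUNCTION task('index estimate_compression',
                      'ix_orders', 'stores_demo', 'informix');

-- Compress the index, consolidate its free space,
-- and return the free space to the dbspace
EXECUTE FUNCTION task('index compress repack shrink',
                      'ix_orders', 'stores_demo', 'informix');
```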

Index compression is documented in the IBM Informix Administrator’s Reference, the IBM Informix Administrator’s Guide, and the IBM Informix Guide to SQL: Syntax.

Save disk space by compressing simple large objects in dbspaces

You can use SQL administration API commands to save disk space by compressing simple large objects (TEXT and BYTE data types) that are stored in the same partition in the same dbspace as the table in which they are referenced. When you run an SQL administration API compress or uncompress command, the database server compresses both the table row data and the referenced simple large objects. You can choose to compress or uncompress only the table row data or only the referenced simple large objects.

The compression of simple large objects in dbspaces is documented in the IBM Informix Administrator’s Reference and the IBM Informix Administrator’s Guide.

Manage compression in IBM OpenAdmin Tool (OAT) for Informix

OAT has new compression options on the Space Administration > Storage > Tables and Indexes page:
  • You can enable automatic compression when you compress existing tables and fragments. You disable automatic compression when you uncompress a table or fragment.
  • You can compress indexes. You can consolidate free space in indexes (repack) and return free space to the dbspace (shrink). You can also estimate the amount of space that you can save if you compress an index. An index must have at least 2000 keys to be compressed.
  • You can compress simple large objects (TEXT and BYTE data types) when you compress tables and fragments. You can also estimate the amount of space that you can save if you compress simple large objects.
IBM Informix Schema Manager Plug-in for OpenAdmin Tool (OAT) has the following compression options on the SQL ToolBox > Schema Manager page:
  • You can enable automatic compression of row data when you create a table. The database server automatically creates a compression dictionary and compresses the data when 2000 or more rows of data are loaded into the table or fragment.
  • You can compress an index when you create an index that has at least 2000 keys.

The new compression options require Informix 12.10.

For more information, view the online help.

Save disk space by enabling automatic data compression

You can use the COMPRESSED keyword with the CREATE TABLE statement to enable the automatic compression of large amounts of in-row data when the data is loaded into a table or table fragment. Then, when 2,000 or more rows of data are loaded, the database server automatically creates a compression dictionary and compresses the new data rows that are inserted into the table.

Also, when you run SQL administration API create dictionary and compress commands on existing tables and fragments, you enable the automatic compression of subsequent data loads that contain 2,000 or more rows of data. If you run an uncompress command, you disable automatic compression.

In addition to saving space, automatic compression saves time because you do not have to compress the data after you load it.
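
A sketch of the CREATE TABLE form, with a hypothetical table:

```sql
-- Automatic compression begins after 2,000 or more rows are loaded
CREATE TABLE meter_readings (
      meter_id   INT,
      read_time  DATETIME YEAR TO SECOND,
      reading    DECIMAL(10,2)
   ) COMPRESSED;
```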

This feature is documented in the IBM Informix Administrator’s Reference, the IBM Informix Administrator’s Guide, and the IBM Informix Guide to SQL: Syntax.

Enhanced built-in storage management for backup and restore

IBM Informix Primary Storage Manager, which replaces IBM Informix Storage Manager (ISM), is easier to set up and use, even in embedded environments. You use the Informix Primary Storage Manager onpsm utility to manage storage for ON-Bar backup and restore operations, including parallel backups, that use file devices (disks).

The onsmsync utility provides new commands that you can use to export backups to, and import them from, Informix Primary Storage Manager external device pools.

Informix Primary Storage Manager and the onsmsync utility are documented in the IBM Informix Backup and Restore Guide.

Configure automatic backups with the ON-Bar utility

After you set up a storage manager, you can configure automatic backups with the ON-Bar utility in IBM OpenAdmin Tool (OAT) for Informix. For example, you can use ON-Bar with the new IBM Informix Primary Storage Manager to manage your storage spaces and backup schedule. Use the Backup wizard on the Space Administration > Backup page. In previous versions of OAT, you could use only the ontape utility for backups.

For more information, view the online help.

Dynamically configure the database server

You can dynamically configure the database server in the following ways:
  • Dynamically modify many configuration parameters by using the onmode command, OAT, or the SQL administration API commands.
  • Dynamically export and import configuration parameters.
  • Use the new AUTO_TUNE configuration parameter to enable or disable all automatic tuning.

You can view more information about parameters, including current values, valid ranges, and parameter descriptions, with onstat commands.
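
For example, with the SQL administration API (the export file path is hypothetical):

```sql
-- Change a dynamically tunable parameter for the running instance
EXECUTE FUNCTION task('modify config', 'AUTO_TUNE', '1');

-- Export the current configuration so that it can be imported elsewhere
EXECUTE FUNCTION task('export config', '/tmp/instance1.cfg');
```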

This feature is documented in the IBM Informix Administrator’s Reference, the IBM Informix Administrator’s Guide, and the IBM Informix Embeddability Guide.

Easily configure an embedded server

You can now configure embedded servers so they require less setup:
  • Embed any environment variable into any configuration parameter value (easy porting to different servers)
  • Simplify configuration files (when the database server starts, it uses only a few critical parameters and does not require the onconfig file)
  • Turn automatic tuning on or off with one new configuration parameter
  • Export and import configuration parameter values

This feature is documented in the IBM Informix Embeddability Guide.

Managing server connections on Windows operating systems

On Windows operating systems, you now configure connectivity information for Informix servers by using the sqlhosts file, not the Windows registry. The file is installed in %INFORMIXDIR%\etc\sqlhosts.%INFORMIXSERVER%, and it uses the same format as the sqlhosts file on UNIX operating systems. The sync_registry Scheduler task automatically converts the connection information between the sqlhosts file format and the Windows registry format. The task runs every 15 minutes. You can manually convert the connection information between the sqlhosts file format and the Windows registry format by running the syncsqlhosts utility.

This feature is documented in the IBM Informix Administrator’s Guide and the IBM Informix Embeddability Guide.

Set local environment variables for Informix instances (UNIX)

You can set local environment variables in the onconfig file for an instance. These settings are independent of the global or system environment variable settings on the computer. The settings can be used by the following utilities: oncheck, onclean, oninit, onload, onlog, onmode, onparams, onspaces, onstat, ontape, onunload. When you run the utility, you must specify the -FILE option before any other options.

This feature is documented in the IBM Informix Administrator’s Reference.

Distributed query support for non-root installations of Informix

You can run distributed queries with non-root installations of Informix. Set trusted users with the REMOTE_USERS_CFG configuration parameter or set trusted hosts with the REMOTE_SERVER_CFG configuration parameter. Previously, distributed queries required servers with root privileges.

Improved ALARMPROGRAM configuration parameter behavior

If the script that the ALARMPROGRAM configuration parameter specifies cannot be located, the ALARMPROGRAM configuration parameter is set to the no_log.sh or no_log.bat script. Update the value of the ALARMPROGRAM configuration parameter to specify a custom script. Previously, if the script did not exist, or if the file path was specified incorrectly, event alarms were not displayed.

This feature is documented in the IBM Informix Administrator’s Reference.

New default values for configuration parameters

The following configuration parameters have new default values in the onconfig.std file:
ROOTSIZE
Previous value = 200000
New value = 300000
DS_NONPDQ_QUERY_MEM
Previous value = 128
New value = 256 on UNIX, 128 on Windows
GSKIT_VERSION
Previous value = 7
New value = Not set. The version of GSKit that is installed with Informix is used.
SDS_LOGCHECK
Previous value = 0
New value = 10 on UNIX, 0 on Windows
DRINTERVAL
Previous value = 30
New value = 0

These changes are documented in the IBM Informix Administrator’s Reference.

New Welcome pages in OAT

When you start IBM OpenAdmin Tool (OAT) for Informix, new welcome pages can help you get started using OAT. From the OAT Welcome page and the OAT Admin Welcome page, you can link to other OAT pages to customize OAT, manage server connections, check server status, view performance history, manage storage space, and learn more about OAT. You can bypass the welcome pages by setting a different page as the home page.

Monitor Informix servers from your mobile device

Use IBM Mobile OpenAdmin Tool for Informix to monitor a single Informix server or a group of Informix servers. For example, you can view the online log and information about users such as commits, connection duration, and rows processed. You can find out which tables have the most inserts, updates, deletes, and scans. You can monitor a server's free memory, processor usage, I/O activity, the number of sessions, and more.

Mobile OAT is available for the iOS and Android operating systems. To use Mobile OAT, you must have OAT v3.11 installed, an Informix database server running, and an Internet connection. You can monitor Informix servers from version 11.10 and later.

For more information about downloading this mobile app, go to http://www.ibm.com/support/docview.wss?uid=swg27038075.

Log in to OAT with your user name

You can use SQL administration API commands to grant individual users privileges to administer database servers. Users who are granted these privileges can log in to the IBM OpenAdmin Tool (OAT) for Informix with their user name. By default, users must log in as user informix or the DBSA. Privilege groups identify which SQL administration API commands a user can run and determine which features the user can access in OAT. User informix or a user with the privilege to grant these privileges can grant the privileges on the Server Administration > User Privileges > SQL Admin API Privileges page.

For SQL administration API command privileges, OAT requires Informix 12.10.

For more information, view the online help.

Monitor multiple database servers at the same time

You can monitor multiple database servers at the same time with the IBM OpenAdmin Tool (OAT) for Informix from the OAT Group Summary dashboard. The dashboard displays all the database servers that are defined in an OAT group. You can view information about the server status, the number of alerts and errors, the processor, memory, and space usage. From the dashboard, you can link to other OAT pages for more detailed information. The dashboard is displayed on the Health Center > Dashboard > Group Summary page. You can also set the Group Summary page as your OAT home page.

For more information, view the online help.

View graphs of the buffer pools and extents for a database server

You can view graphs of the number of extents in databases or the percentage of cached pages in the buffer pools for databases in IBM OpenAdmin Tool (OAT) for Informix. Go to the Performance Analysis > System Reports > Graphs page. This type of graph is also called a heat map or a treemap.

For more information, view the online help.

Increased scalability with optimized caching

Cache access and management are optimized to provide faster performance for large systems that have many users.

The sizes of memory caches are now twice the values that are set by the DS_POOLSIZE, PC_POOLSIZE, PLCY_POOLSIZE, or USRC_POOLSIZE configuration parameters. For example, if the DS_POOLSIZE configuration parameter is set to 127, 254 entries are allowed in the data distribution cache. If all entries in a cache are full, the cache size automatically grows by 10%. You can also dynamically increase cache sizes in memory.

You can view more information about caches and mutexes with onstat commands. For example, the onstat -g cac command displays detailed information about memory caches, which can help you determine whether a cache is configured for optimal performance.

This feature is documented in the IBM Informix Administrator's Reference.

Store R-tree indexes in dbspaces with non-default page sizes

When you create an R-tree index, you can specify to store the index in a dbspace with a non-default page size. Previously, R-tree indexes required dbspaces with the default page size.

This feature is documented in the IBM Informix R-tree Index User’s Guide.

Faster queries through the virtual table interface

You can create an access method through the virtual table interface to process qualifiers to a WHERE clause that contain multiple column, constant, or expression parameters. Processing through a virtual table interface is generally faster than SQL processing. Previously, the virtual table interface processed only simple qualifiers that consisted of a Boolean operator that compared a column name and a constant. You can use the new parameter descriptor to describe multiple column, constant, or expression parameters to the qualifier clause.

This feature is documented in the IBM Informix Virtual-Table Interface Programmer's Guide.

Grid queries for consolidating data from multiple grid servers

You can write a grid query to select data from multiple servers in a grid. Use the new GRID clause in the SELECT statement to specify the servers on which to run the query. After the query is run, the results that are returned from each of the servers are consolidated.
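
A sketch of the GRID clause, with hypothetical grid and table names:

```sql
-- Run the query on every server in the grid
-- and consolidate the returned results
SELECT region, SUM(total_sales)
   FROM sales GRID 'sales_grid'
   GROUP BY region;
```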

This feature is documented in the IBM Informix Enterprise Replication Guide and the IBM Informix Guide to SQL: Syntax.

Easily propagate external files through a grid

You can propagate external files that are in a specific directory to other servers in the grid by running the ifx_grid_copy() procedure. For example, if a grid has 50 servers, you can copy an executable file from one server to the other 49 servers by running one procedure.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Manage grids in OAT

You can manage grids in IBM OpenAdmin Tool (OAT) for Informix if you have the Schema Manager plug-in and the Replication plug-in. You can manage grid tables and regions and run grid queries in OAT:
  • Add tables to a grid table list or remove tables from a grid table list, on the Schema Manager page of the Schema Manager plug-in. Grid tables must have the same column names and attributes on all grid servers. You can find out if a table is a grid table in the Information and Task Status panel for a selected table on the Schema Manager page.
  • A region is a subset of the servers in a grid. If the tables on the servers have the same column names and attributes, you can run grid queries against the servers in the region or grid. A grid query runs on each server in the region or the grid and the results are consolidated. To create, modify, or delete regions, go to the Replication > Grid page of the Replication plug-in.
  • Use Query By Example to run a grid query against all the servers in a grid or a region. Select the grid or the region in the SQL Toolbox > Query By Example page.

Grid queries require Informix 12.10.

For more information, view the online help.

Defer the propagation of DDL statements in a grid

You can run DDL statements in a grid context on a local server but defer the propagation of the DDL statements to the other grid servers. After you test the effects of the DDL statement, you can propagate the deferred DDL statements or remove them. You specify whether to defer the propagation of DDL statements in the ifx_grid_connect() procedure, and whether to enable Enterprise Replication for the deferred DDL statements.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Replicate tables without primary keys or ERKEY columns

Enterprise Replication requires a unique key to replicate data. Previously, Enterprise Replication required that the replicated table definition included a primary key or the ERKEY shadow columns. ERKEY columns require extra storage space. You can now specify the columns in a unique index as the replication key with the --key option, or allow Enterprise Replication to assign a primary key, ERKEY columns, or a unique index as the replication key with the --anyUniqueKey option.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Simplified setup of a data consolidation system

In a data consolidation system, multiple primary servers that contain different data replicate to one target server. The target server does not replicate any data. You can easily set up a data consolidation replication system by defining a replicate and specifying that the primary servers are participants that send only data. Previously, you would configure this type of data consolidation system by defining a different replicate for each primary server.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Replicates are mastered by default

By default, Enterprise Replication replicates are master replicates. If you do not specify a master server with the --master option, the master replicate is based on the first participant. A master replicate uses saved dictionary information about the attributes of replicated columns to verify that participants conform to the specified schema. To create a classic replicate, which does not verify the schemas of participants, include the --classic option in the cdr define replicate command.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Replicate time-series data

You can replicate time-series data with Enterprise Replication. For example, if you collect time-series data in multiple locations, you can consolidate the data to a central server.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Replicate light-append operations

Unlogged changes to a table, such as when data is added by a light append, can be replicated through Enterprise Replication. For example, you can use the express-load operation of the High-Performance Loader (HPL).

This feature is documented in the IBM Informix Enterprise Replication Guide.

Improved network failover support

You can configure the Connection Manager to monitor application network connections and to initiate failover when a network failure occurs. In earlier releases, the Connection Manager initiated failover only when the primary server failed.

This feature is documented in the IBM Informix Administrator’s Reference and the IBM Informix Administrator’s Guide.

Improved transactional consistency for HDR synchronization

Use improved HDR synchronization options to balance system performance and data protection in your high-availability cluster. Set the new HDR_TXN_SCOPE configuration parameter or environment option to choose between fully synchronous mode, asynchronous mode, or nearly synchronous mode. The three synchronization modes control when transaction commits are returned to client applications: after being processed on the primary server, after being sent to the HDR secondary server, or after being processed on the HDR secondary server. HDR synchronization can be set at the instance or session level.

This feature is documented in the IBM Informix Administrator’s Guide and the IBM Informix Administrator’s Reference.

Configuring log flow control for shared-disk secondary servers

You can limit log activity on the primary server so that shared-disk (SD) secondary servers in the cluster can catch up. This configuration can improve performance over congested or intermittent networks. You use the SDS_FLOW_CONTROL configuration parameter to set thresholds that start and stop flow control.

This feature is documented in the IBM Informix Administrator’s Guide and the IBM Informix Administrator’s Reference.

Reduce replication latency between Enterprise Replication and shared-disk secondary servers

If an Enterprise Replication server is a primary server for shared-disk secondary servers, you can reduce replication latency by reducing the number of transactions that are applied before the logs are flushed to disk. By default, the logs are flushed after 50 transactions are applied, or 5 seconds elapse. You can set the CDR_MAX_FLUSH_SIZE configuration parameter to 1 to flush the logs after every transaction and reduce replication latency.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Apply transactions for a replicate serially

You can specify to apply replicated transactions for a specific replicate serially. By default, replicated transactions are applied in parallel. If Enterprise Replication detects deadlock conditions, it automatically reduces the parallelism for the replication system until the problem is resolved. If you have a replicate that consistently reduces parallelism or your application requires serial processing, include the --serial option when you define or modify a replicate. By isolating a problematic replicate, you can improve the performance of the rest of the replication system. The onstat -g rcv full command displays the number of concurrent transactions and whether any replicate is preventing parallel processing.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Monitor the status of Enterprise Replication queues

You can check the status of Enterprise Replication queues by using the cdr check queue command. Check the queue status before you run a command that might have a dependency on a previously run command.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Enterprise Replication supported among non-root servers

You can replicate data among database servers that have non-root installations and that do not have a user informix account. The servers must have the same owner. Previously, Enterprise Replication required servers to connect as user informix.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Automatic space management for Enterprise Replication

If you have a storage pool, storage spaces are created automatically if needed when you define a replication server. Also, the CDR_DBSPACE and CDR_QDATA_SBSPACE configuration parameters are set automatically in the onconfig file. In earlier versions of Informix, you had to create the required spaces and set the configuration parameters before you could define a replication server.

This feature is documented in the IBM Informix Enterprise Replication Guide.

Continuously refresh the data in data marts

If your business requires that the data in your warehouse is always synchronized with the data in your database, you can configure the continuous refreshing of data in your data marts. This continuous synchronization process is sometimes referred to as trickle feed. You run the ifx_setupTrickleFeed() function to start refreshing the data in the data mart at the frequency that you specify.

You can refresh data when you choose in a data mart by running the ifx_refreshMart() function. The ifx_refreshMart() function checks for data changes in individual partitions and refreshes the changed partitions.
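
For example (the accelerator and data mart names are hypothetical, and the argument lists shown are illustrative):

```sql
-- Start refreshing the data mart continuously,
-- at a hypothetical interval of 10 minutes
EXECUTE FUNCTION ifx_setupTrickleFeed('myacc', 'sales_mart', 10);

-- Or refresh only the changed partitions on demand
EXECUTE FUNCTION ifx_refreshMart('myacc', 'sales_mart', NULL);
```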

This change is documented in the IBM Informix Warehouse Accelerator Administration Guide.

Enhanced query acceleration

Informix Warehouse Accelerator was enhanced with the following capabilities:
  • When you combine multiple queries with UNION or UNION ALL operations, each of the queries is considered for acceleration and is accelerated if it qualifies.
  • You can use ANSI SQL Standard OLAP window functions and aggregates in your warehouse queries. When the query matches a data mart and qualifies to run in Informix Warehouse Accelerator, Informix uses the Informix Warehouse Accelerator for the underlying SELECT, JOIN, and PROJECT operations. The Informix database server then evaluates the OLAP expressions and the ORDER BY clause before returning the results to the application.
  • When you sort query results with the ORDER BY clause, you can use the new keyword options of NULLS FIRST or NULLS LAST to specify the NULL ordering. Earlier releases treated NULL as the lowest value and sorted it accordingly.
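
For example, to list orders by payment date while keeping unpaid (NULL) orders at the end of the result set:

```sql
SELECT order_num, paid_date
   FROM orders
   ORDER BY paid_date DESC NULLS LAST;
```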

This feature is documented in the IBM Informix Warehouse Accelerator Administration Guide.

Administer Informix Warehouse Accelerator with IBM OpenAdmin Tool (OAT) for Informix

You can now use OAT to administer Informix Warehouse Accelerator. You can create an accelerator, monitor the status of an accelerator, and view the data marts that are associated with an accelerator on the OAT Warehouse Accelerator page.

You can create and load data marts and view the data marts that are associated with a database. After you load data in a data mart, you can refresh the table partitions that were changed since the last load and schedule a time frame and frequency to load the changed data. Alternatively, you can keep the data mart fact tables synchronized with the data in your database by setting up a schedule that continuously refreshes the data. To administer data marts, go to the SQL ToolBox > Schema Manager page.

For Informix Warehouse Accelerator, OAT requires Informix 12.10.

For more information, view the online help.

New SQL routines for administering the Informix Warehouse Accelerator

Informix Warehouse Accelerator includes new functions and procedures that you can use from any SQL client to perform administrative tasks. For example, you can use functions to create a data mart, to load a data mart, or to gather accelerator metrics.

The names of all Informix Warehouse Accelerator routines now start with the ifx_ prefix: for example, ifx_createMart().
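
A sketch, with hypothetical accelerator and data mart names; the ifx_loadMart() call and its argument list are assumptions for illustration:

```sql
-- Create an empty data mart on the accelerator, then load it
EXECUTE FUNCTION ifx_createMart('myacc', 'sales_mart');
EXECUTE FUNCTION ifx_loadMart('myacc', 'sales_mart', NULL);
```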

This feature is documented in the IBM Informix Warehouse Accelerator Administration Guide.

Required privileges for administering Informix Warehouse Accelerator

To administer Informix Warehouse Accelerator with Informix Warehouse Accelerator SQL routines, Java classes, the IBM OpenAdmin Tool (OAT) for Informix or Informix Smart Analytics Optimizer Studio, a user must be user informix, have the DBA role, or have the WAREHOUSE privilege. Previously, any user was allowed to administer Informix Warehouse Accelerator. User informix, or a user with ADMIN or GRANT privilege for SQL administration API commands, can grant the WAREHOUSE privilege by running the admin() or task() function with the grant admin and WAREHOUSE arguments.
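
For example, user informix can grant the WAREHOUSE privilege to a hypothetical user joe:

```sql
EXECUTE FUNCTION task('grant admin', 'joe', 'warehouse');
```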

This change is documented in the IBM Informix Warehouse Accelerator Administration Guide.

Manage time-series data in rolling window containers

You can control the amount of time-series data that is stored in containers by specifying when to delete obsolete data. You create a rolling window container that has multiple partitions that are stored in multiple dbspaces. You configure a rolling window container to define the time interval for each partition and how many partitions are allowed: for example, 12 partitions that each store a month of data. When you insert data for a new month, a new partition is created, and if the number of partitions exceeds the maximum that is allowed, the oldest partition becomes dormant. You specify when to destroy dormant partitions. Previously, you had to delete obsolete data manually.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Faster writing to time-series containers

By default, multiple sessions can now write to a time-series container simultaneously. However, you can limit the number of sessions to one. Data is loaded faster if only one session writes to the container. Use the TSContainerLock procedure to control whether multiple sessions are allowed. Previously, you wrote your application to prevent more than one session from writing to a container at one time.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Load time-series data faster by reducing logging

If you load time-series elements into containers in a single transaction, you can save time by specifying a reduced amount of logging. By default, every time-series element that you insert generates two log records: one for the inserted element and one for the page header update. However, you can specify that page header updates are logged for each transaction instead. For example, you can insert a set of daily meter readings for a meter in one transaction and reduce the amount of logging by almost half.

Run one or more of the PutElem, PutElemNoDups, PutNthElem, InsElem, BulkLoad, or PutTimeSeries functions with the TSOPEN_REDUCED_LOG (256) flag or the TSL_Flush function with the 257 flag within a transaction without other functions or SQL statements. If you insert data through a virtual table, run the TSCreateVirtualTab procedure with the TS_VTI_REDUCED_LOG (256) flag, and then insert data within a transaction without other types of statements.

This feature is documented in the IBM Informix TimeSeries Data User's Guide.

Write a custom program to load time-series data

You can use time-series SQL routines to write a custom program that loads time-series data into the database. You can load data in parallel in a highly efficient manner by controlling what data is loaded into which containers. You can include a custom loader program in your application.

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.

Enhancements to the Informix TimeSeries Plug-in for Data Studio

When you use the Informix TimeSeries Plug-in for Data Studio, you can load time-series data into an Informix database directly from another database. You do not have to export the data into a file. When you create a table definition, specify a connection to a database and a query to return the data that you want to load. You can preview the returned data to validate the query. You can also set other properties of the load job within the plug-in.

You can create a load job with the Informix TimeSeries Plug-in for Data Studio, and then you can run the load job from the command line. Running load jobs from the command line requires the Eclipse Platform Runtime Binary instead of Data Studio or the full Eclipse installation.

These features are documented in the IBM Informix TimeSeries Data User’s Guide.

Return the timestamp of the first or last time-series element

You can return the timestamp of the first or last element in a time series by running the GetFirstElementStamp function or the GetLastElementStamp function. You can choose whether the element can be null or must contain data. For example, you can return the first element that has data to determine the number of null elements between the origin and the first element that has data.
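
A sketch, assuming a hypothetical table with a TimeSeries column named readings:

```sql
-- Timestamp of the earliest element in the time series
SELECT GetFirstElementStamp(readings)
   FROM daily_meters
   WHERE meter_id = 1001;

-- Timestamp of the latest element in the time series
SELECT GetLastElementStamp(readings)
   FROM daily_meters
   WHERE meter_id = 1001;
```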

This feature is documented in the IBM Informix TimeSeries Data User’s Guide.