Before you start
About this series
Thinking about seeking certification on System Administration for IBM Informix version 11.70 (Exam 919)? If so, you've landed in the right spot to get started. This series of IBM Informix certification preparation tutorials covers all the topics you'll need to understand before you read that first exam question. Even if you're not seeking certification right away, this set of tutorials is a great place to start learning what's new in IBM Informix 11.70.
About this tutorial
In this tutorial, you'll learn about how to use the IBM Informix tools to create dbspaces, sbspaces, and temporal dbspaces. You'll learn to create and manage table fragmentation and to use storage-optimization features, such as compression. The material provided here primarily covers the objectives in Section 2 of the exam, entitled Space Management.
After completing this tutorial, you should be able to:
- Create dbspaces and sbspaces
- Add chunks to a dbspace or sbspace
- Rename dbspaces
- Mirror a dbspace or sbspace in chunks
- Use table fragmentation
- Use attach or detach to add or remove fragments to a fragmented table
- Use data compression to optimize storage
To understand the material presented in this tutorial, you must be familiar with the following:
- The IBM Informix environment (configuration file and parameters, installation, and administration)
- Database server commands (onstat, onmode, oncheck, dbschema)
- IBM Informix concepts and terminology (dbspaces, chunks, physical log, logical logs, checkpoint, and so on)
You do not need a copy of IBM Informix to complete this tutorial. However, you will get more out of the tutorial if you download the free trial version of Informix Innovator-C Edition (see Resources) to work along with this tutorial.
Configuring and managing storage space
A storage space is a physical area where IBM Informix stores data. There are several types of storage spaces available with IBM Informix, including dbspaces, blobspaces, smart-blob spaces (sbspaces), and external spaces. You can create a maximum of 32,766 storage spaces on the database server.
The following utilities help you manage storage spaces:
- onspaces utility
- SQL administration API commands
- OpenAdmin Tool (OAT)
A dbspace is a logical unit that can contain between 1 and 32,766 chunks. Place databases, tables, logical-log files, and the physical log in dbspaces.
Creating a dbspace that uses the default page size
To create a standard dbspace using onspaces, you must be logged in as informix or as root. On Windows, users in the Informix-Admin group can create a dbspace. Ensure that the database server is in online, administration, or quiescent mode. Use the onspaces -c -d option to create a dbspace or a temporary dbspace, as shown in Listing 1.
Listing 1. Create a dbspace
onspaces -c -d <dbspace_name> -p <pathname or drive> -o <offset> -s <size> [-m <pathname> <offset>]
The maximum size of a dbspace is equal to the maximum number of chunks multiplied by the maximum size of a chunk. (The maximum number of chunks is 32,766 per instance. The maximum size of a chunk is equal to 2147483647 pages multiplied by the page size.)
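The limits above multiply together, which a quick calculation makes concrete. This is a sketch of the arithmetic only (a 2-KB page size is assumed for illustration); the actual maximum depends on your platform's page size.

```python
# Sanity-check the dbspace and chunk size limits described above.
MAX_CHUNKS_PER_INSTANCE = 32766
MAX_PAGES_PER_CHUNK = 2147483647  # maximum chunk size, in pages

def max_chunk_size_kb(page_size_kb=2):
    # Maximum chunk size = page limit x page size.
    return MAX_PAGES_PER_CHUNK * page_size_kb

def max_dbspace_size_kb(page_size_kb=2):
    # Maximum dbspace size = chunk limit x maximum chunk size.
    return MAX_CHUNKS_PER_INSTANCE * max_chunk_size_kb(page_size_kb)

print(max_chunk_size_kb())    # KB per chunk at a 2-KB page size
print(max_dbspace_size_kb())  # KB per dbspace at a 2-KB page size
```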
Listing 2 shows how to create a 10 MB mirrored dbspace, dbspce1, with an offset of 5000 KB for both the primary and mirror chunks, using the default page size and raw disk space on UNIX.
Listing 2. Example mirrored dbspace creation
onspaces -c -d dbspce1 -p /dev/raw_dev1 -o 5000 -s 10240 -m /dev/raw_dev2 5000
Listing 3 shows how to create a 5 MB dbspace, dbspc3, with an offset of 200 KB on raw disk space (drive e:) on Windows.
Listing 3. Example dbspace creation
onspaces -c -d dbspc3 -p \\.\e: -o 200 -s 5120
Creating dbspaces with a non-default page size
You can specify a page size for a standard or temporary dbspace if you want a longer key length than is available for the default page size. The root dbspace uses the default page size. If you want to specify a page size, the size must be an integral multiple of the default page size, and cannot be greater than 16 KB.
For systems with sufficient storage, the performance advantages of a larger page size include:
- Reduced depth of b-tree indexes, even for smaller index keys
- Decreased checkpoint time
You can gain additional performance advantages by doing the following:
- Group on the same page long rows that currently span multiple pages of the default page size
- Define a different page size for temporary tables so that the temporary tables have a separate buffer pool
You can use the BUFFERPOOL configuration parameter to create a buffer pool that corresponds to the page size of the dbspace. A table can be in one dbspace, and the index for that table can be in another dbspace. The page size for these partitions can be different. Complete the following steps to specify the page size for the dbspace.
- Use the onmode -BC command to enable large chunk mode if it is not already enabled. By default, when IBM Informix is first initialized or restarted, it starts with large chunk mode enabled. For information about the onmode utility, see the IBM Informix Administrator's Reference (see Resources).
- Create a buffer pool that corresponds to the page size of the dbspace. You can use the onparams utility or the BUFFERPOOL configuration parameter. You should do this before you create the dbspace. If you create a dbspace with a page size that does not have a corresponding buffer pool, IBM Informix automatically creates a buffer pool using the default parameters defined in the onconfig configuration file. You cannot have multiple buffer pools with the same page size.
- Define the page size of the dbspace when you create the dbspace. You can use the onspaces utility or ON-Monitor. For example, if you create a dbspace with a page size of 6 KB, you must create a buffer pool with a page size of 6 KB. If you do not specify a page size for the new buffer pool, IBM Informix uses the operating system default page size (4 KB on Windows and 2 KB on most UNIX platforms) as the default page size for the buffer pool.
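The page-size rule in the steps above can be checked mechanically: a non-default page size must be an integral multiple of the default page size and no larger than 16 KB. This is an illustration of that rule only, assuming a 2-KB default (typical on UNIX); it is not part of any Informix utility.

```python
# Which page sizes are valid for a dbspace, given the default page size?
def is_valid_page_size(page_kb, default_kb=2, max_kb=16):
    # Must be an integral multiple of the default, and no larger than 16 KB.
    return page_kb % default_kb == 0 and default_kb <= page_kb <= max_kb

print([k for k in range(1, 20) if is_valid_page_size(k)])
# -> [2, 4, 6, 8, 10, 12, 14, 16]
```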
To specify where to allocate the temporary files, create temporary dbspaces. To define temporary dbspaces, complete the following steps.
- Use the onspaces utility with the -c -d -t options.
- Use the DBSPACETEMP environment variables or the DBSPACETEMP configuration parameter to specify the dbspaces that the database server can use for temporary storage.
The DBSPACETEMP configuration parameter can contain dbspaces with a non-default page size. Although you can include dbspaces with different page sizes in the parameter list for DBSPACETEMP, the database server uses only dbspaces with the same page size as the first listed dbspace.
If you create more than one temporary dbspace, the dbspaces must be located on separate disks to optimize the I/O. Listing 4 shows how to create a 5 MB temporary dbspace named temp_space with an offset of 5000 KB.
Listing 4. Temporary dbspace
onspaces -c -t -d temp_space -p /dev/raw_dev1 -o 5000 -s 5120
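The DBSPACETEMP rule described above (only temporary dbspaces whose page size matches the first listed dbspace are used) can be sketched as a simple selection function. The dbspace names and page sizes here are made up for illustration.

```python
# Which dbspaces from a DBSPACETEMP list does the server actually use?
def usable_temp_dbspaces(dbspacetemp, page_sizes):
    """dbspacetemp: ordered list of dbspace names;
    page_sizes: mapping of name -> page size in KB."""
    if not dbspacetemp:
        return []
    first = page_sizes[dbspacetemp[0]]
    # Only dbspaces with the same page size as the first listed one are used.
    return [name for name in dbspacetemp if page_sizes[name] == first]

sizes = {"tmp1": 2, "tmp2": 4, "tmp3": 2}
print(usable_temp_dbspaces(["tmp1", "tmp2", "tmp3"], sizes))  # ['tmp1', 'tmp3']
```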
An sbspace is a logical storage unit composed of one or more chunks that store smart large objects. Smart large objects consist of character large object (CLOB) and binary large object (BLOB) data types. User-defined data types can also use sbspaces.
Create the default sbspace
Complete the following steps to create a default sbspace.
- Set the ONCONFIG parameter SBSPACENAME to the name of your default sbspace. You must update the ONCONFIG file before you start the database server. Listing 5 shows how to name the default sbspace.
Listing 5. Naming default sbspace sbsp1
SBSPACENAME sbsp1 # Default sbspace name
- Use the onspaces utility to create the sbspace. Listing 6 shows how to create an sbspace called sbsp1 in the partition /dev/sbspace.
Listing 6. Creating an sbspace in the partition /dev/sbspace
onspaces -c -S sbsp1 -g 2 -p /dev/sbspace -o 0 -s 100000 -Df "LOGGING=ON"
The example sbspace has an initial offset of 0, a size of 100 MB, and logging is turned on.
You can use the -Df option of the onspaces utility to specify new default values for sbspace parameters. This section shows how to log sbspaces and how to customize size specifications for smart large objects stored in an sbspace.
Turning logging on
You must use logged sbspaces when you create an etx index; otherwise, the CREATE INDEX statement fails. The onspaces utility creates sbspaces with logging turned off by default to conserve resources. However, etx index users benefit from logging. If there is a power outage while an etx index is being updated, for example, the index might become corrupt. If the sbspace that holds the index has logging turned off, the changes up to the point of corruption cannot be backed out. In this case, to ensure index integrity, the index must be dropped and re-created. If logging is turned on, the etx index can be recovered normally.
Turn logging on by specifying
-Df "LOGGING=ON" when you create an sbspace
with the onspaces utility, as shown in Listing 7.
Listing 7. Turning logging on
onspaces -c -S sbsp1 -g 2 -p /dev/sbspace -o 0 -s 100000 -Df "LOGGING=ON"
Adding a chunk to a dbspace or sbspace
Add a chunk when a dbspace or sbspace is becoming full or requires more disk space. Use the onspaces utility to add a chunk. On most platforms the maximum chunk size is 4 TB; on some platforms it is 8 TB.
Note: The newly added chunk (and its associated mirror, if one exists) is available immediately. If you are adding a chunk to a mirrored storage space, you must also add a mirror chunk.
To add a chunk using onspaces on UNIX, you must be logged in as user informix or root. On Windows, users in the Informix-Admin group can add a chunk. Ensure that the database server is in online mode, administration mode, quiescent mode, or the cleanup phase of fast-recovery mode.
Listing 8 adds a 10 MB mirror chunk to blobsp3. An offset of 200 KB for both the primary and mirror chunk is specified. If you are not adding a mirror chunk, you can omit the -m option.
Listing 8. Adding a mirror chunk
onspaces -a blobsp3 -p /dev/raw_dev1 -o 200 -s 10240 -m /dev/raw_dev2 200
Listing 9 adds a 5 MB chunk of raw disk space, at an offset of 5200 KB, to dbspace dbspc3.
Listing 9. Adding raw disk space
onspaces -a dbspc3 -p \\.\e: -o 5200 -s 5120
You can also define information that IBM Informix can use to automatically extend the size of a chunk when additional storage space is required for an application. If you have extendable chunks, you are not required to add new chunks or spend time trying to determine which storage space (dbspace, temporary dbspace, sbspace, temporary sbspace, or blobspace) will run out of space and when.
You can use the onspaces utility to rename a dbspace if you are user informix or have DBA privileges and the database server is in quiescent mode (and no other mode).
To rename a dbspace, use the onspaces command shown in Listing 10.
Listing 10. Renaming a dbspace
onspaces -ren old_dbspace_name -n new_dbspace_name
You can rename standard dbspaces and all other spaces, including blobspaces, smart blobspaces, temporary spaces, and external spaces. However, you cannot rename any critical dbspace, such as the root dbspace or a dbspace that contains the physical log.
You can rename a dbspace or an sbspace when enterprise replication is enabled or on a primary database server when data replication is enabled. You cannot rename a dbspace or an sbspace on a secondary database server or when the secondary database server is part of the enterprise replication configuration.
Renaming a dbspace only changes the dbspace name; it does not reorganize the data. The rename dbspace command updates the dbspace name everywhere that name is stored. This includes reserved pages on disk, system catalogs, the ONCONFIG configuration file, and in-memory data structures.
Note: After renaming a dbspace, you should perform a level-0 archive of the renamed dbspace and the root dbspace.
Mirroring is a strategy that pairs a primary chunk of one defined dbspace, blobspace, or sbspace with an equal-sized mirror chunk.
Each write to the primary chunk is automatically accompanied by an identical write to the mirror chunk, as shown in Figure 1. If a failure occurs on the primary chunk, mirroring enables you to read from and write to the mirror chunk until you can recover the primary chunk, all without interrupting user access to data.
When you use disk mirroring, the database server writes data to two locations. Mirroring eliminates data loss due to storage device failures: if one copy of the data becomes unavailable for any reason, the mirror copy is available immediately and transparently to users.
Figure 1. Writing data to both the primary chunk and the mirror chunk
Creation of a mirror chunk
When you specify a mirror chunk, the database server copies all the data from the primary chunk to the mirror chunk. This copy process is known as recovery. Mirroring begins as soon as recovery is complete.
The recovery procedure that marks the beginning of mirroring is delayed if you start to mirror chunks within a dbspace that contains a logical-log file. Mirroring for dbspaces that contain a logical-log file does not begin until you create a level-0 backup of the root dbspace. The delay ensures that the database server can use the mirrored logical-log files if the primary chunk that contains these logical-log files becomes unavailable during a dbspace restore.
The level-0 backup copies the updated database server configuration information, including information about the new mirror chunk, from the root dbspace reserved pages to the backup. If you perform a data restore, the updated configuration information at the beginning of the backup directs the database server to look for the mirrored copies of the logical-log files if the primary chunk becomes unavailable. If this new storage-space backup information does not exist, the database server is unable to take advantage of the mirrored log files.
For similar reasons, you cannot mirror a dbspace that contains a logical-log file while a dbspace backup is being created. The new information that must be in the first block of the dbspace backup tape cannot be copied there after the backup has begun.
Listing 11 shows an example of adding a mirror chunk using the onspaces command. Specify the mirror chunk path and offset with the -m option.
Listing 11. Mirror chunk path and offset
onspaces -a db_acct -p /dev/chunk2 -o 5000 -s 2500 -m /dev/mirror_chk2 5000
Benefits of mirroring
If media failure occurs, mirroring provides the database server administrator with a means of recovering data without taking the database server offline. This feature results in greater reliability and less system downtime. Furthermore, applications can continue to read from and write to a database whose primary chunks are on the affected media, provided that the chunks that mirror this data are located on separate media.
Costs of mirroring
Mirroring results in disk-space and performance costs. The disk-space cost is due to the additional space required for storing the mirror data. The performance cost results from performing writes to both the primary and mirror chunks. The use of multiple virtual processors for disk writes reduces this performance cost. Using split reads can actually cause performance to improve for read-only data. For a split read, the database server reads data from either the primary chunk or the mirror chunk, depending on the location of the data within the chunk.
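The write and read behavior described above can be sketched as a toy model: every write goes to both chunks, and a read can be served from the mirror when the primary is unavailable. This only illustrates the concept; the database server does this transparently at the page level.

```python
# Toy model of a mirrored chunk pair, as described in the text.
class MirroredChunk:
    def __init__(self, pages):
        self.primary = [None] * pages
        self.mirror = [None] * pages
        self.primary_online = True

    def write(self, page, data):
        # Each write to the primary chunk is accompanied by an identical
        # write to the mirror chunk.
        self.primary[page] = data
        self.mirror[page] = data

    def read(self, page):
        # If the primary chunk is down, the mirror serves the read
        # transparently, without interrupting access to the data.
        return self.primary[page] if self.primary_online else self.mirror[page]

chunk = MirroredChunk(4)
chunk.write(0, "row data")
chunk.primary_online = False  # simulate a media failure on the primary
print(chunk.read(0))          # row data
```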
When the database server recovers a mirror chunk, it performs the same recovery procedure that it uses when mirroring begins. The mirror-recovery process consists of copying the data from the existing online chunk onto the new, repaired chunk until the two are identical.
When you initiate recovery, the database server puts the disabled chunk in recovery mode and copies the information from the online chunk to the recovery chunk. When the recovery is complete, the chunk automatically receives online status. You can perform the same recovery steps whether you are recovering the primary chunk of a mirrored pair or recovering the mirror chunk.
Using data skip
One benefit that fragmentation provides is the ability to skip table fragments that are unavailable during an I/O operation. For example, a query can proceed even when a fragment is located on a chunk that is currently down as a result of a disk failure. When this situation occurs, a disk failure affects only a portion of the data in the fragmented table. By contrast, tables that are not fragmented can become completely inaccessible if they are located on a disk that fails.
To enable data skip, complete the following steps.
- Set the DATASKIP configuration parameter to OFF, ALL, or ON dbspace_list. OFF means that the database server does not skip any fragments; if a fragment is unavailable, the query returns an error. ALL indicates that any unavailable fragment is skipped. ON dbspace_list instructs the database server to skip any fragments that are located in the specified dbspaces.
- Use the SET DATASKIP statement to specify the dbspaces to skip, for example SET DATASKIP ON dbspace1, dbspace5. This statement causes the database server to skip dbspace1 or dbspace5 whenever the application attempts to access one of the dbspaces and the database server finds that the dbspace is unavailable. If the database server finds that both dbspace1 and dbspace5 are unavailable, it skips both dbspaces.
The DEFAULT setting for the SET DATASKIP statement enables a database server administrator to control the dataskip feature, as shown in Listing 12.
Listing 12. Setting the dataskip statement
SET DATASKIP DEFAULT
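The three DATASKIP behaviors can be sketched as a selection function over a table's fragments: OFF raises an error for an unavailable fragment, ALL skips any unavailable fragment, and a dbspace list skips only fragments in the named dbspaces. The fragment and dbspace names below are made up for illustration.

```python
# Sketch of the DATASKIP settings applied to a fragmented table.
def readable_fragments(fragments, down, mode):
    """fragments: fragment name -> dbspace; down: set of unavailable dbspaces;
    mode: 'OFF', 'ALL', or a set of dbspace names allowed to be skipped."""
    result = []
    for frag, dbs in fragments.items():
        if dbs in down:
            if mode == "OFF":
                raise RuntimeError(f"fragment {frag} unavailable")
            if mode == "ALL" or dbs in mode:
                continue  # fragment is skipped; the query proceeds without it
            raise RuntimeError(f"fragment {frag} unavailable")
        result.append(frag)
    return result

frags = {"f1": "dbspace1", "f2": "dbspace2", "f3": "dbspace5"}
print(readable_fragments(frags, {"dbspace5"}, "ALL"))          # ['f1', 'f2']
print(readable_fragments(frags, {"dbspace5"}, {"dbspace5"}))   # ['f1', 'f2']
```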
Using RAW tables
RAW tables are nonlogging, permanent tables that are similar to tables in a nonlogging database. RAW tables have the following characteristics.
- Update, insert, and delete operations on rows in a RAW table are supported, but the operations are not logged.
- You can define indexes on RAW tables, but they do not support primary key constraints or unique constraints.
- Light appends are not supported for loading RAW tables, except in high-performance loader (HPL) operations and in queries that specify INTO TEMP ... WITH NO LOG.
- A RAW table has the same attributes, whether it is stored in a logging database or in a nonlogging database.
- If you update a RAW table, you cannot reliably restore the data unless you perform a level-0 backup after the update. If the table has not been updated since that backup, you can restore the RAW table from the last physical backup, but backing up only the logical logs is not sufficient for a RAW table to be recoverable.
- Fast recovery can roll back incomplete transactions on STANDARD tables but not on RAW tables.
RAW tables are intended for the initial loading and validation of data. To load RAW tables, you can use any loading utility, including dbimport or the HPL in express mode. If an error or failure occurs while loading a RAW table, the resulting data is whatever was on the disk at the time of the failure.
- Do not use RAW tables within a transaction. After you have loaded the data, use the ALTER TABLE statement to change the table to type STANDARD and perform a level-0 backup before you use the table in a transaction.
- Do not use enterprise replication on RAW or TEMP tables. There are some restrictions when using RAW tables in a high-availability cluster environment. Because modifications made to RAW tables are not logged, and because secondary servers (including HDR, RSS, and SDS) use log records to stay synchronized with the primary server, the following restrictions apply to RAW tables:
- On a primary server, RAW tables can be created, dropped, and accessed; however, altering the table mode from unlogged to logged, or from logged to unlogged, is not allowed. Altering a table's mode in a high-availability cluster environment yields an error.
- On secondary servers (HDR, SDS, or RSS), RAW tables are not accessible for any operation. Attempting to access a RAW table from SQL yields an error.
Listing 13 shows the code to create a RAW table.
Listing 13. Creating a RAW table
create raw table t1 (col1 char(8));
Understanding table fragmentation and data storage
The fragmentation feature gives you additional control over where the database stores data. You are not limited to specifying the locations of individual tables and indexes. You can also specify the location of table and index fragments, which are parts of a table or index that are located on different storage spaces. You can fragment dbspaces or sbspaces.
Usually, you fragment a table when you initially create it. Listing 14 shows examples of two CREATE TABLE statements.
Listing 14. Creating a table
CREATE TABLE tablename ... FRAGMENT BY ROUND ROBIN IN dbspace1, dbspace2, dbspace3;

CREATE TABLE tablename ... FRAGMENT BY EXPRESSION <Expression 1> IN dbspace1, <Expression 2> IN dbspace2, <Expression 3> IN dbspace3;
The FRAGMENT BY ROUND ROBIN and FRAGMENT BY EXPRESSION keywords refer to two different distribution schemes. Both statements associate fragments with dbspaces.
When you fragment a table, you can also create multiple partitions of the table within the same dbspace, as shown in Listing 15.
Listing 15. Example to create multiple partitions within the same dbspace
CREATE TABLE tb1(a int) FRAGMENT BY EXPRESSION PARTITION part1 (a >=0 AND a < 5) in dbs1, PARTITION part2 (a >=5 AND a < 10) in dbs1 ... ;
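The two distribution schemes above can be sketched as row-routing functions: round robin cycles inserts across the dbspaces, while expression-based fragmentation evaluates each fragment expression in order. The ranges below mirror the partitions in Listing 15; everything else is illustrative.

```python
# Sketch of the two fragmentation schemes as row routers.
from itertools import cycle

def round_robin_router(dbspaces):
    # Successive rows land in successive dbspaces, wrapping around.
    spaces = cycle(dbspaces)
    return lambda row: next(spaces)

def expression_router(row):
    # Mirrors Listing 15: PARTITION part1 (a >= 0 AND a < 5),
    # PARTITION part2 (a >= 5 AND a < 10), both in dbs1.
    a = row["a"]
    if 0 <= a < 5:
        return "part1"
    if 5 <= a < 10:
        return "part2"
    raise ValueError("no fragment expression matches the row")

rr = round_robin_router(["dbspace1", "dbspace2", "dbspace3"])
print([rr({}) for _ in range(4)])  # ['dbspace1', 'dbspace2', 'dbspace3', 'dbspace1']
print(expression_router({"a": 7}))  # part2
```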
Figure 2 illustrates the role of fragments in specifying the location of data.
Figure 2. dbspaces that link logical units (including table fragments) and physical units of storage
Configuring automatic space management
To avoid out-of-space errors and to reduce the time spent monitoring for storage spaces that are running low, IBM Informix can add more storage space automatically whenever it is required. The automatic space management function allows the server to add space on its own, or you can manually expand a space or extend a chunk.
To configure automatic space management on the server, set the onconfig parameters SP_AUTOEXPAND, SP_THRESHOLD, and SP_WAITTIME. Then create a storage pool that contains the entries for available storage spaces (raw devices, cooked files, and directories) that the server uses to expand a storage space (dbspace, temporary dbspace, sbspace, temporary sbspace, or blobspace).
When the server expands a storage space, the server can add a chunk to the storage space. If the storage space is a non-mirrored dbspace or a temporary dbspace, the server can also extend a chunk in the storage space.
Using ONCONFIG parameters
To enable the server to automatically add space to an available storage space, configure the ONCONFIG parameters as shown in Table 1.
Table 1. Automatic space management configuration parameter
- SP_AUTOEXPAND: Enables or disables the automatic creation or extension of chunks.
- SP_THRESHOLD: Defines the minimum number of free kilobytes that can exist in a storage space before the server automatically runs a task to expand the space, either by extending an existing chunk in the space or by adding a new chunk.
- SP_WAITTIME: Specifies the maximum number of seconds that a thread waits for a dbspace, temporary dbspace, sbspace, temporary sbspace, or blobspace to expand before returning an out-of-space error.
Understanding the storage pool
Every IBM Informix database has a storage pool. The storage pool contains information about the directories, cooked files, and raw devices that the server can use if necessary to automatically expand an existing dbspace, temporary dbspace, sbspace, temporary sbspace, or blobspace.
When the storage space falls below a threshold defined in the SP_THRESHOLD configuration parameter, IBM Informix can automatically run a task that expands the space, either by extending an existing chunk in the space or by adding a new chunk.
You can use SQL administration API commands to do any of the following:
- Add, delete, or modify an entry that describes one directory, cooked file, or raw device in the storage pool. The server can use the specified information in the storage pool entry when necessary to automatically add space to an existing storage space.
- Control how a storage pool entry is used by modifying two different dbspace sizes that are associated with expanding a storage space, the extend size and the create size.
- Mark a chunk as extendable or not extendable.
- Immediately expand the size of a space when you do not want IBM Informix to automatically expand the space.
- Immediately extend the size of a chunk by a specified minimum amount.
- Create a storage space or chunk from an entry in the storage pool.
- Return empty space from a dropped storage space or chunk to the storage pool.
The storage pool table in the sysadmin database contains information about all of the entries in a storage pool for an IBM Informix server.
When the storage pool contains entries, you can also run SQL administration API commands to manually execute the following tasks:
- Expand the storage space or extend a chunk, when you do not want to wait for the task that automatically expands the space to run.
- Create storage spaces from storage pool entries and return space from empty storage spaces to the storage pool.
As an alternative to running SQL administration API commands, you can use the OpenAdmin Tool (OAT) as a graphical interface to configure the automatic and manual space management and to manage storage pool entries.
Managing storage pool entries
You manage the storage pool by running the admin() or task() function with a set of arguments that select the operation to perform. Table 2 describes the arguments for each operation.
Table 2. Arguments to use with the SQL admin() or task() function to manage storage pool entries
- storagepool add: Creates a storage pool entry.
EXECUTE FUNCTION task("storagepool add", "path", "begin_offset", "total_size", "chunk_size", "priority");
- storagepool modify: Modifies a storage pool entry.
EXECUTE FUNCTION task("storagepool modify", "storage_pool_entry_id", "new_total_size", "new_chunk_size", "new_priority");
- storagepool delete: Deletes a storage pool entry.
EXECUTE FUNCTION task("storagepool delete", "storage_pool_entry_id");
- storagepool purge all: Deletes all storage pool entries.
EXECUTE FUNCTION task("storagepool purge all");
- storagepool purge full: Deletes all storage pool entries that are full.
EXECUTE FUNCTION task("storagepool purge full");
- storagepool purge errors: Deletes storage pool entries that have errors.
EXECUTE FUNCTION task("storagepool purge errors");
The default units for storage pool sizes and offsets are KB. However, you can specify information in any of the following ways:
- "100000 K"
- "100 MB"
- "100 GB"
- "100 TB"
Listing 16 shows you how to add a new storage pool entry where the space is a directory named /inst/dbspaces with a beginning offset of 0, a total size of 0, an initial chunk size of 50 MB, and a high priority.
Listing 16. Example of adding a new storage pool entry
EXECUTE FUNCTION task("storagepool add", "/inst/dbspaces", "0", "0", "50000","1");
In this example, the offset of 0 and the total size of 0 are the only acceptable entries for a directory.
Listing 17 shows you how to change the total size, chunk size, and priority of storage pool entry 8 to 10 GB, 10 MB, and a medium priority.
Listing 17. Example of changing the attributes
EXECUTE FUNCTION task("storagepool modify", "8", "10 GB", "10000", "2");
Listing 18 shows you how to delete the storage pool entry with an entry ID of 7.
Listing 18. Example of deleting a storage pool entry
EXECUTE FUNCTION task("storagepool delete", "7");
Creating extendable chunks
Extendable chunks are chunks that IBM Informix can automatically extend or that you can manually extend when additional storage space is required for an application. If you have extendable chunks, you are not required to add new chunks or spend time trying to determine which storage space will run out of space and when it will run out of space. Configuring IBM Informix to automatically add more storage space prevents the error that can occur if a partition requires additional storage space and cannot find that space in one of the chunks in the space in which the partition is located.
An extendable chunk must be in a nonmirrored dbspace or temporary dbspace.
You can use an SQL administration API command with the modify space sp_sizes argument to modify the extend size and the create size for the space in which your extendable chunk is located, using the syntax in Listing 19.
Listing 19. Modify the extend size
EXECUTE FUNCTION task("modify space sp_sizes","space_name", "new_create_size","new_extend_size");
Listing 20 sets the minimum create size to 60 MB and the minimum extend size to 10 MB for dbspace3.
Listing 20. Set minimum create size
EXECUTE FUNCTION task("modify space sp_sizes", "dbspace3", "60000","10000");
Listing 21 sets the minimum create size to 20 percent and the minimum extend size to 1.5 percent for dbspace8.
Listing 21. Set minimum create size and minimum extend size
EXECUTE FUNCTION task("modify space sp_sizes", "dbspace8", "20","1.5");
Marking a chunk as extendable or not extendable
You mark a chunk as extendable to enable the automatic or manual extension of the chunk. You can change the mark to not extendable to prevent the automatic or manual extension of the chunk. If a chunk is marked as not extendable, the server cannot automatically extend the chunk when there is little or no free space in the chunk. (However, if the storage pool contains entries, the server can expand a storage space by adding another chunk to the storage space.) Also, if a chunk is marked as not extendable, you cannot manually extend the size of the chunk.
An extendable chunk must be in an unmirrored dbspace or temporary dbspace.
To mark a chunk as extendable, run the admin() or task() function with the modify chunk extendable argument, as shown in Listing 22.
Listing 22. Enabling the modify chunk extendable argument
EXECUTE FUNCTION task("modify chunk extendable", "chunk number");
To mark a chunk as not extendable, run the admin() or task() function with the modify chunk extendable off argument, as shown in Listing 23.
Listing 23. Disabling the modify chunk extendable argument
EXECUTE FUNCTION task("modify chunk extendable off", "chunk number");
Listing 24 specifies that chunk 12 can be extended.
Listing 24. Extending chunk 12
EXECUTE FUNCTION task("modify chunk extendable", "12");
Changing the threshold and wait time for the automatic addition of more space
While IBM Informix can react to out-of-space conditions by automatically extending or adding chunks when a storage space is completely full, you can also configure the server to extend or add chunks before a storage space is completely full.
Specify a threshold for the minimum amount of free KB in a dbspace, temporary dbspace, sbspace, temporary sbspace, or blobspace. The threshold you define triggers a task that expands the space.
You can also use the SP_WAITTIME configuration parameter to specify the maximum number of seconds that a thread waits for a space to expand before returning an out-of-space error.
To change the threshold and wait time, complete one of the following steps:
- Change the value of the threshold specified in the SP_THRESHOLD configuration parameter from 0 (disabled) to a non-zero value: either a value from 1 to 50 to indicate a percentage of the space, or a value from 1000 up to the maximum size of a chunk, in kilobytes.
- Change the value of the SP_WAITTIME configuration parameter, which specifies the maximum number of seconds that a thread waits for a space to expand before returning an out-of-space error.
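How an SP_THRESHOLD value is interpreted follows directly from the rule above: 0 disables the trigger, 1 to 50 is read as a percentage of the space, and 1000 or more is read as an absolute number of kilobytes. A sketch of that classification (the helper itself is illustrative, not part of the server):

```python
# Interpret an SP_THRESHOLD value as the free-space floor (in KB) below
# which the expansion task is triggered for a given storage space.
def threshold_free_kb(sp_threshold, space_total_kb):
    if sp_threshold == 0:
        return None  # threshold-based expansion disabled
    if 1 <= sp_threshold <= 50:
        # Values 1-50 are a percentage of the space's total size.
        return space_total_kb * sp_threshold / 100.0
    # Values from 1000 upward are an absolute number of kilobytes.
    return float(sp_threshold)

print(threshold_free_kb(10, 500000))     # 50000.0 (10% of a 500,000 KB space)
print(threshold_free_kb(20000, 500000))  # 20000.0 (absolute KB)
```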
Configuring the frequency of the monitor low storage task
You can change the frequency of the mon_low_storage task defined in sysadmin database, which periodically scans the list of dbspaces to find spaces that fall below the threshold indicated by the SP_THRESHOLD configuration parameter. If the task finds spaces below the threshold, the task attempts to expand the space by extending an extendable chunk or by using the storage pool to add a chunk.
The default frequency of the mon_low_storage task is once per hour, but you can configure the task to run more or less frequently.
To configure the mon_low_storage task to run more or less frequently, run the SQL statements in Listing 25, where minutes is the number of minutes between each run.
Listing 25. Configuring mon_low_storage
DATABASE sysadmin;

UPDATE ph_task
   SET tk_frequency = INTERVAL (minutes) MINUTE TO MINUTE
 WHERE tk_name = "mon_low_storage";
For example, to configure the task to run every 10 minutes, run the SQL statements in Listing 26.
Listing 26. Run mon_low_storage every 10 minutes
DATABASE sysadmin;

UPDATE ph_task
   SET tk_frequency = INTERVAL (10) MINUTE TO MINUTE
 WHERE tk_name = "mon_low_storage";
Manually expanding a space or extending an extendable chunk
You can manually expand a space or extend a chunk when necessary, instead of waiting for IBM Informix to automatically expand the space or extend a chunk.
Before you start, remember the following:
- You can extend a chunk only if it is in an unmirrored dbspace or temporary dbspace.
- The chunk must be marked as extendable before it can be extended. If not, you must run the admin() or task() function with the modify chunk extendable argument to specify that the chunk is extendable.
- If a space cannot be expanded by extending a chunk, the storage pool must contain active entries that the server can use to create new chunks.
To immediately increase your storage space, use one of the following methods:
- Manually expand a space by running the admin() or task() function with the modify space expand argument, as shown in Listing 27.
Listing 27. Use modify space expand argument
EXECUTE FUNCTION task("modify space expand", "space_name","size");
For example, Listing 28 expands the space with space number 8 by approximately 1 GB (specified as 1000000 KB).
Listing 28. Expand space 8 by 1 GB
EXECUTE FUNCTION task("modify space expand", "8", "1000000");
The server expands the space either by extending a chunk in the space or by adding a new chunk. The server might round the requested size up, depending on the page size of the storage space and the configured chunk size for any storage pool entry used during the expansion.
- Manually extend a chunk by running the admin() or task() function with the modify chunk extend argument, as shown in Listing 29.
Listing 29. Use modify chunk extend argument
EXECUTE FUNCTION task("modify chunk extend", "chunk_number","extend_amount");
For example, Listing 30 extends chunk number 12 by 5000 KB.
Listing 30. Extend chunk 12 by 5000 KB
EXECUTE FUNCTION task("modify chunk extend", "12", "5000");
The server might round the requested size up, depending on the page size of the storage space.
Understanding the storagepool table
The storagepool table in the sysadmin database contains information about all of the entries in the storage pool in an IBM Informix instance. Each entry represents free space that the server can use when automatically expanding a storage space.
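A quick way to see what the server can draw on is to query the table directly. This is a minimal sketch; because the exact column set of the storagepool table can vary by server version, SELECT * is used rather than assuming particular column names:

```sql
-- Connect to the sysadmin database and list every storage pool entry
DATABASE sysadmin;

SELECT * FROM storagepool;
```

Each returned row describes one piece of free space (a file, directory, or raw device) that the server can use to create or extend chunks.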
Understanding table fragmentation
Fragmentation is a database server feature that allows you to control where data is stored at the table level. Fragmentation enables you to define groups of rows, smart large objects, or index keys within a table according to some algorithm or scheme.
The scheme that you use to group rows or index keys into fragments is called the distribution scheme. The distribution scheme and the set of dbspaces in which you locate the fragments together make up the fragmentation strategy. After you decide whether to fragment table rows, index keys, or both, and you decide how the rows or keys should be distributed over fragments, you can decide on a scheme to implement this distribution.
When you create fragmented tables and indexes, the database server stores the location of each table and index fragment with other related information in the system catalog table named sysfragments. You can use this table to access information about your fragmented tables and indexes. If you use a user-defined routine as part of the fragmentation expression, that information is recorded in sysfragexprudrdep.
From the perspective of an end user or client application, a fragmented table is identical to a nonfragmented table. Client applications do not require any modifications to allow them to access the data in fragmented tables.
Consider fragmenting your tables if improving at least one of the following is your goal:
- Single-user response time
- Backup-and-restore characteristics
- Loading of data
Fragmented tables can belong to either a logging database or a nonlogging database. As with nonfragmented tables, if a fragmented table is part of a nonlogging database, a potential for data inconsistencies arises if a failure occurs.
Exploring distribution schemes for table fragmentation
A distribution scheme is a method that the database server uses to distribute rows or index entries to fragments. IBM Informix database servers support the following distribution schemes:
- Expression-based
- Round-robin
Expression-based distribution scheme
An expression-based distribution scheme puts rows that contain specified values in the same fragment. You specify a fragmentation expression that defines criteria for assigning a set of rows to each fragment, either as a range rule or as an arbitrary rule. You can specify a remainder fragment that holds all rows that do not match the criteria for any other fragment, although a remainder fragment reduces the efficiency of the expression-based distribution scheme.
To specify an expression-based distribution scheme, use the FRAGMENT BY EXPRESSION clause of the CREATE TABLE or CREATE INDEX statement, as shown in Listing 31.
Listing 31. Example to use the FRAGMENT BY EXPRESSION clause
CREATE TABLE accounts (id_num INT, name CHAR(15))
   FRAGMENT BY EXPRESSION
      id_num <= 100 IN dbspace_1,
      id_num > 100 AND id_num <= 200 IN dbspace_2,
      id_num > 200 IN dbspace_3
When you use the FRAGMENT BY EXPRESSION clause of the CREATE TABLE statement to create a fragmented table, you must supply one condition for each fragment of the table that you are creating.
You can define range rules or arbitrary rules that indicate to the database server how rows are to be distributed to fragments.
Expression-based range rule
A range rule uses SQL relational and logical operators to define the boundaries of each fragment in a table. A range rule can contain the following restricted set of operators:
- The relational operators >, <, >=, <=
- The logical operators AND and OR
- Algebraic expressions, including built-in functions
A range rule can be based on a simple algebraic expression as shown in Listing 32. In this example, the expression is a simple reference to a column.
Listing 32. Example to use the FRAGMENT BY EXPRESSION clause with range rule
FRAGMENT BY EXPRESSION
   id_num > 0 AND id_num <= 20 IN dbsp1,
   id_num > 20 AND id_num <= 40 IN dbsp2,
   id_num > 40 IN dbsp3
The expression in a range rule can also be a conjunction or disjunction of two or more algebraic expressions. Listing 33 shows two algebraic expressions used to define two sets of ranges. The first set of ranges is based on the expression YEAR(Died) - YEAR(Born); the second set is based on MONTH(Born).
Listing 33. Example showing two algebraic expressions used to define two sets of ranges
FRAGMENT BY EXPRESSION
   YEAR(Died) - YEAR(Born) < 21 AND MONTH(Born) >= 1 AND MONTH(Born) < 4 IN dbsp1,
   YEAR(Died) - YEAR(Born) < 40 AND MONTH(Born) >= 4 AND MONTH(Born) < 7 IN dbsp2,
Expression-based arbitrary rule
An arbitrary rule uses SQL relational and logical operators. Unlike range rules, arbitrary rules allow you to use any relational operator and any logical operator to define the rule. In addition, you can reference any number of table columns in the rule. Arbitrary rules typically include the use of the OR logical operator to group data, as shown in Listing 34.
Listing 34. Example showing the usage of the OR logical operator to group data in ranges
FRAGMENT BY EXPRESSION
   zip_num = 95228 OR zip_num = 95443 IN dbsp2,
   zip_num = 91120 OR zip_num = 92310 IN dbsp4,
   REMAINDER IN dbsp5
Using the MOD function
You can use the MOD function in a FRAGMENT BY EXPRESSION clause to map each row in a table to a set of integers (hash values). The database server uses these values to determine in which fragment it stores a given row. Listing 35 shows how you might use the MOD function in an expression-based distribution scheme.
Listing 35. Example showing the usage of MOD function
FRAGMENT BY EXPRESSION
   MOD(id_num, 3) = 0 IN dbsp1,
   MOD(id_num, 3) = 1 IN dbsp2,
   MOD(id_num, 3) = 2 IN dbsp3
Insert and update rows
When you insert or update a row, the database server evaluates fragment expressions, in the order specified, to see if the row belongs in any of the fragments. If so, the database server inserts or updates the row in one of the fragments. If the row does not belong in any of the fragments, the row is put into the fragment that the remainder clause specified. If the distribution scheme does not include a remainder clause and if the row does not match the criteria for any of the existing fragment expressions, the database server returns an error.
Round-robin distribution scheme
A round-robin distribution scheme places rows one after another in fragments, rotating through the series of fragments to distribute the rows evenly. The database server defines the rule internally.
For INSERT statements, the database server uses a hash function on a random number to determine the fragment in which to place the row. For INSERT cursors, the database server places the first row in a random fragment, the second in the next sequential fragment, and so on. If one of the fragments is full, it is skipped.
To specify a round-robin distribution scheme, use the FRAGMENT BY ROUND ROBIN clause of the CREATE TABLE statement. Listing 36 shows a fragmented table with a round-robin distribution scheme.
Listing 36. A round-robin distribution scheme
CREATE TABLE account_2
   ...
   FRAGMENT BY ROUND ROBIN IN dbspace1, dbspace2, dbspace3
When the database server receives a request to insert a number of rows into a table that uses round-robin distribution, it distributes the rows in such a way that the number of rows in each of the fragments remains approximately the same. Round-robin distributions are also called even distributions because information is distributed evenly among the fragments. The rule for distributing rows to tables that use round-robin distribution is internal to the database server.
Note: You can use the round-robin distribution scheme only for table fragmentation. You cannot fragment an index with this distribution scheme.
Creating a fragmented table
After you determine the appropriate distribution scheme for table fragmentation, you can fragment a table at the same time that you create it, or you can fragment existing nonfragmented tables.
To create a fragmented table, use the FRAGMENT BY clause of the CREATE TABLE statement. Listing 37 shows you how to create a fragmented table similar to the orders table of the stores_demo database. The example uses a round-robin distribution scheme with three fragments. Consult with your database server administrator to set up three dbspaces: one for each of the fragments dbspace1, dbspace2, and dbspace3.
Listing 37. Example showing the creation of a fragmented table
CREATE TABLE my_orders (
   order_num     SERIAL(1001),
   order_date    DATE,
   customer_num  INT,
   ship_instruct CHAR(40),
   backlog       CHAR(1),
   po_num        CHAR(10),
   ship_date     DATE,
   ship_weight   DECIMAL(8,2),
   ship_charge   MONEY(6),
   paid_date     DATE,
   PRIMARY KEY (order_num),
   FOREIGN KEY (customer_num) REFERENCES customer(customer_num))
   FRAGMENT BY ROUND ROBIN IN dbspace1, dbspace2, dbspace3
Listing 38 shows how to create the same table with expression-based fragmentation.
Listing 38. Example creating a fragmented table with expression-based fragmentation
CREATE TABLE my_orders (order_num SERIAL, ...)
   FRAGMENT BY EXPRESSION
      order_num < 10000 IN dbspace1,
      order_num >= 10000 AND order_num < 20000 IN dbspace2,
      order_num >= 20000 IN dbspace3
Create a fragmented table from nonfragmented tables
You might be required to convert nonfragmented tables into fragmented tables in the following circumstances:
- You have an application-implemented version of table fragmentation.
- You want to convert several small tables into one large fragmented table.
- You have an existing large table that you want to fragment.
Remember that before you perform the conversion, you must set up an appropriate number of dbspaces to contain the newly created fragmented tables.
Create more than one nonfragmented table
You can combine two or more nonfragmented tables into a single fragmented table. The nonfragmented tables must have identical table structures and must be stored in separate dbspaces. To combine nonfragmented tables, use the ATTACH clause of the ALTER FRAGMENT statement.
For example, suppose that you have three nonfragmented tables account1, account2, and account3, and that you store the tables in dbspaces dbspace1, dbspace2, and dbspace3, respectively. All three tables have identical structures, and you want to combine the three tables into one table that is fragmented by the expression on the common column acc_num. You want rows with acc_num less than or equal to 1120 to be stored in dbspace1. Rows with acc_num greater than 1120 but less than or equal to 2000 are to be stored in dbspace2. Finally, rows with acc_num greater than 2000 are to be stored in dbspace3. Listing 39 shows how to code this.
Listing 39. Using the fragmentation strategy to fragment the tables
ALTER FRAGMENT ON TABLE account1
   ATTACH
      account1 AS acc_num <= 1120,
      account2 AS acc_num > 1120 AND acc_num <= 2000,
      account3 AS acc_num > 2000;
The result is a single fragmented table, account1. The other tables, account2 and account3, were consumed and no longer exist.
Using a single, nonfragmented table
To create a fragmented table from a nonfragmented table, use the INIT clause of the ALTER FRAGMENT statement. For example, suppose you want to convert the table orders to a table fragmented by round-robin. The SQL statement in Listing 40 performs the conversion.
Listing 40. Convert table orders
ALTER FRAGMENT ON TABLE orders INIT FRAGMENT BY ROUND ROBIN IN dbspace1, dbspace2, dbspace3;
Any existing indexes on the nonfragmented table become fragmented with the same fragmentation strategy as the table.
Rowids in a fragmented table
The term rowid refers to an integer that defines the physical location of a row. The rowid of a row in a nonfragmented table is a unique and constant value. By contrast, rows in fragmented tables are not assigned rowids.
Note: Use primary keys as a method of access in your applications rather than rowids. Because primary keys are defined in the ANSI specification of SQL, using primary keys to access data makes your applications more portable.
To accommodate applications that must reference a rowid for a fragmented table, you can explicitly create a rowid column for a fragmented table. However, you cannot use the WITH ROWIDS clause for typed tables.
To create the rowid column, use one of the following SQL syntaxes:
- The WITH ROWIDS clause of the CREATE TABLE statement
- The ADD ROWIDS clause of the ALTER TABLE statement
- The INIT clause of the ALTER FRAGMENT statement
When you create the rowid column, the database server takes the following actions:
- Adds the 4-byte unique value to each row in the table
- Creates an internal index that it uses to access the data in the table by rowid
- Inserts a row in the sysfragments system catalog table for the internal index
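As a sketch of the first two syntaxes above, either of the following statements gives a fragmented table a rowid column. The table and dbspace names are illustrative:

```sql
-- Create a fragmented table with a rowid column from the start
CREATE TABLE account (acc_num INT)
   FRAGMENT BY ROUND ROBIN IN dbspace1, dbspace2
   WITH ROWIDS;

-- Or add a rowid column to an existing fragmented table
ALTER TABLE account ADD ROWIDS;
```

In both cases the server adds the 4-byte rowid value to every row and builds the internal index described above.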
Fragmenting smart large objects
You can specify multiple sbspaces in the PUT clause of the CREATE TABLE statement to achieve round-robin fragmentation of smart large objects on a column. If you specify multiple sbspaces for a CLOB or BLOB column, the database server distributes the smart large objects for the column to the specified sbspaces in round-robin fashion. Given the following CREATE TABLE statement, the database server can distribute large objects from the cat_photo column to sbcat1, sbcat2, and sbcat3 in round-robin fashion. Listing 41 shows the code.
Listing 41. Round-robin fragmentation of smart large objects on a column
CREATE TABLE catalog (
   catalog_num SERIAL,
   stock_num   SMALLINT,
   manu_code   CHAR(3),
   cat_descr   LVARCHAR,
   cat_photo   BLOB)
   PUT cat_photo IN (sbcat1, sbcat2, sbcat3);
Modifying your fragmentation strategies
You can make two general types of modifications to a fragmented table. The first type consists of the modifications that you can make to a nonfragmented table. Such modifications include adding a column, dropping a column, changing a column data type, and so on. For these modifications, use the ALTER TABLE statements that you would normally use on a nonfragmented table. The second type of modification consists of changes to a fragmentation strategy.
At times, you might be required to alter a fragmentation strategy after you implement fragmentation. Most frequently, you will be required to modify your fragmentation strategy when you use fragmentation with intraquery or interquery parallelization. Modifying your fragmentation strategy in these circumstances is one of several ways you can improve the performance of your database server system. See the next section for more details on how to modify your fragmentation strategy.
Reinitializing a fragmentation strategy
You can use the ALTER FRAGMENT statement with an INIT clause to define and initialize a new fragmentation strategy on a nonfragmented table, or you can convert an existing fragmentation strategy on a fragmented table. You can also use the INIT clause to change the order of evaluation of fragment expressions.
Listing 42 shows how you might use the INIT clause to reinitialize a fragmentation strategy completely.
Listing 42. Initial creation of a fragmented table
CREATE TABLE account (acc_num INTEGER, ...)
   FRAGMENT BY EXPRESSION
      acc_num <= 1120 IN dbspace1,
      acc_num > 1120 AND acc_num < 2000 IN dbspace2,
      REMAINDER IN dbspace3;
Suppose that after several months of using this distribution scheme, you find that the number of rows in the fragment contained in dbspace2 is twice the number of rows that the other two fragments contain. This imbalance causes the disk that contains dbspace2 to become an I/O bottleneck.
To remedy this situation, you decide to modify the distribution so that the number of rows in each fragment is approximately even. You want to modify the distribution scheme so that it contains four fragments instead of three fragments. A new dbspace, dbspace2a, is to contain the new fragment that stores the first half of the rows that previously were contained in dbspace2. The fragment in dbspace2 contains the second half of the rows that it previously stored.
To implement the new distribution scheme, first create the dbspace dbspace2a, and then use the code in Listing 43.
Listing 43. Implementing a new distribution scheme
ALTER FRAGMENT ON TABLE account INIT
   FRAGMENT BY EXPRESSION
      acc_num <= 1120 IN dbspace1,
      acc_num > 1120 AND acc_num <= 1500 IN dbspace2a,
      acc_num > 1500 AND acc_num < 2000 IN dbspace2,
      REMAINDER IN dbspace3;
As soon as you execute this statement, the database server discards the old fragmentation strategy, and the rows that the table contains are redistributed according to the new fragmentation strategy.
You can also use the INIT clause of ALTER FRAGMENT to perform the following actions:
- Convert a single, nonfragmented table into a fragmented table
- Convert a fragmented table into a nonfragmented table
- Convert a table fragmented by any strategy to any other fragmentation strategy
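For example, the second conversion in the list above can be sketched as follows, assuming a fragmented table named account and a target dbspace named dbspace1 (both names are illustrative):

```sql
-- Convert the fragmented account table into a nonfragmented table
-- whose rows are all stored in dbspace1
ALTER FRAGMENT ON TABLE account INIT IN dbspace1;
```

As with any INIT operation, the server discards the old fragmentation strategy and moves the rows accordingly.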
Modifying fragmentation strategies
You can use the ADD, DROP, and MODIFY clauses to change the fragmentation strategy on a table or index.
The ADD clause
When you define a fragmentation strategy, you might be required to add one or more fragments. You can use the ADD clause of the ALTER FRAGMENT statement to add a new fragment to a table. Suppose that you want to add a fragment to a table that you create with the statement in Listing 44.
Listing 44. Creating a round-robin fragmented table
CREATE TABLE sales (acc_num INT, ...) FRAGMENT BY ROUND ROBIN IN dbspace1, dbspace2, dbspace3;
To add a new fragment dbspace4 to the table sales, use the command shown in Listing 45.
Listing 45. Adding a new fragment
ALTER FRAGMENT ON TABLE sales ADD dbspace4;
If the fragmentation strategy is expression based, the ADD clause of ALTER FRAGMENT contains options to add a dbspace before or after an existing dbspace.
The DROP clause
When you define a fragmentation strategy, you might drop one or more fragments. With IBM Informix, you can use the DROP clause of the ALTER FRAGMENT ON TABLE statement to drop a fragment from a table. Suppose you want to drop a fragment from a table that you create with the statement in Listing 46.
Listing 46. Dropping a fragment
CREATE TABLE sales (col_a INT, ...) FRAGMENT BY ROUND ROBIN IN dbspace1, dbspace2, dbspace3;
The ALTER FRAGMENT statement Listing 47 uses a DROP clause to drop the third fragment dbspace3 from the sales table.
Listing 47. Dropping the third fragment
ALTER FRAGMENT ON TABLE sales DROP dbspace3;
When you issue this statement, all the rows in dbspace3 are moved to the remaining dbspaces: dbspace1 and dbspace2.
The MODIFY clause
Use the ALTER FRAGMENT statement with the MODIFY clause to modify one or more of the expressions in an existing fragmentation strategy. Suppose you initially create the fragmented table in Listing 48.
Listing 48. Creating a fragmented table
CREATE TABLE account (acc_num INT, ...) FRAGMENT BY EXPRESSION acc_num <= 1120 IN dbspace1, acc_num > 1120 AND acc_num < 2000 IN dbspace2, REMAINDER IN dbspace3;
When you execute the ALTER FRAGMENT statement in Listing 49, you ensure that no account numbers with a value less than or equal to 0 are stored in the fragment that dbspace1 contains.
Listing 49. Altering fragment example
ALTER FRAGMENT ON TABLE account MODIFY dbspace1 TO acc_num > 0 AND acc_num <= 1120;
You cannot use the MODIFY clause to alter the number of fragments that your distribution scheme contains. Use the INIT or ADD clause of ALTER FRAGMENT instead.
Granting and revoking privileges on fragments
You must have a strategy to control data distribution if you want to grant useful fragment privileges. One effective strategy is to fragment data records by expression. The round-robin data-record distribution strategy, however, is not a useful strategy, because each new data record is added to the next fragment. A round-robin distribution nullifies any clean method of tracking data distribution and therefore eliminates any real use of fragment authority. Because of this difference between expression-based distribution and round-robin distribution, the GRANT FRAGMENT and REVOKE FRAGMENT statements apply only to tables that have expression-based fragmentation.
When you create a fragmented table, no default fragment authority exists. Use the GRANT FRAGMENT statement to grant insert, update, or delete authority on one or more of the fragments. If you want to grant all three privileges simultaneously, use the ALL keyword of the GRANT FRAGMENT statement. Remember that you cannot grant fragment privileges by merely naming the table that contains the fragments. You must name the specific fragments.
When you want to revoke insert, update, or delete privileges, use the REVOKE FRAGMENT statement. This statement revokes privileges from one or more users on one or more fragments of a fragmented table. If you want to revoke all privileges that currently exist for a table, you can use the ALL keyword. If you do not specify any fragments in the command, the permissions being revoked apply to all fragments in the table that currently have permissions.
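As an illustrative sketch (the table, dbspace, and user names are assumptions), fragment-level privileges might be granted and revoked as follows:

```sql
-- Grant insert and update authority on one fragment of the account table
GRANT FRAGMENT INSERT, UPDATE ON account (dbspace1) TO mary;

-- Grant all three fragment privileges on two fragments at once
GRANT FRAGMENT ALL ON account (dbspace1, dbspace2) TO tom;

-- Revoke all fragment privileges that mary holds on the dbspace1 fragment
REVOKE FRAGMENT ALL ON account (dbspace1) FROM mary;
```

Note that the fragments are named by the dbspaces that store them, which is why these statements apply only to expression-based fragmentation, where you control which rows land in which dbspace.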
Attaching or detaching fragments
You can use ALTER FRAGMENT ATTACH and DETACH statements to perform fragmentation operations. ALTER FRAGMENT ATTACH provides a way to combine two nonfragmented tables or add fragments to a fragmented table. ALTER FRAGMENT DETACH provides a way to delete a fragment of a fragmented table.
Use the ATTACH clause of the ALTER FRAGMENT ON TABLE statement to combine tables that have identical structures into a fragmentation strategy. The clause also provides a way to load large amounts of data into an existing table incrementally by taking advantage of the fragmentation technology.
Any tables that you attach must have been created previously in separate partitions. You cannot attach the same table more than once. All consumed tables listed in the ATTACH clause must have the same structure as the surviving table. The number, names, data types, and relative position of the columns must be identical.
User-defined routines and references to fields of a ROW-type column are not valid. You cannot attach a fragmented table to another fragmented table.
All of the dbspaces that store the fragments must have the same page size.
An ATTACH operation on two nonfragmented tables cannot produce a surviving table that is fragmented by interval or by list. (If you want to attach two nonfragmented tables, use the INIT option of ALTER FRAGMENT to define an interval or list fragmentation scheme for one of the nonfragmented tables, and then use the ATTACH option to attach the second table to it.)
For surviving tables that are fragmented by interval, the following restrictions apply:
- Because the database server determines the ordinal position of interval fragments, BEFORE and AFTER specifications are not valid.
- You cannot attach a fragment whose expression matches an existing interval fragment expression.
- When you attach fragments above the transition value, the upper limit of the fragment being attached must align at an interval fragment boundary. That is, the upper limit of the fragment must equal the transition value plus an integer multiple of the interval value.
For fragmented tables that are protected by a security policy, attaching a fragment to the table fails if any of the following conditions are not satisfied:
- The source table and the target table are both protected by the same security policy.
- Both tables have the same protection granularity (either row-level, column-level, or both row-level and column-level).
- In both tables, the same set of protected columns is protected by the same security labels. If there is more than one protected column, there can be more than one security label in each table, but the same label must protect the same column in both tables.
If the ATTACH operation fails because one or more of these conditions are not satisfied, you can use the ALTER TABLE statement to make the schemas of the two tables identical, and then repeat the ALTER FRAGMENT ATTACH statement on the modified tables.
When you transform tables with identical table structures into fragments in a single table, you allow the database server to manage the fragmentation instead of allowing the application to manage the fragmentation. The distribution scheme can be round-robin or expression based.
To make a single, fragmented table from two or more identically-structured, nonfragmented tables, the ATTACH clause must contain the surviving table in the attach list. The attach list is the list of tables in the ATTACH clause.
To include a rowid column in the newly created, single, fragmented table, attach all tables first, and then add the rowid with the ALTER TABLE statement.
Attaching a table to a fragmented table
To attach a nonfragmented table to an already fragmented table, the nonfragmented table must have been created in a separate dbspace and must have the same table structure as the fragmented table.
When you attach one or more tables to a fragmented table, a consumed_table must be nonfragmented.
Using the ONLINE keyword in ATTACH operations
The ONLINE keyword instructs the database server to commit the ALTER FRAGMENT ATTACH work internally, if there are no errors, and to apply an intent exclusive lock to the surviving table, rather than an exclusive lock. An exclusive lock is applied to the consumed table, which must be a nonfragmented table.
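A minimal sketch of the syntax, assuming a surviving table employee that is fragmented by interval and a nonfragmented consumed table employee2 whose rows span a single interval (all names and values here are illustrative):

```sql
-- Attach employee2 as a new interval fragment while employee stays online;
-- the surviving table receives only an intent exclusive lock
ALTER FRAGMENT ONLINE ON TABLE employee
   ATTACH employee2 AS PARTITION p_emp2 VALUES < 600;
```

The requirements for this kind of operation are described in the next section.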
Indexes after an ATTACH
A detached index on the surviving table retains its same fragmentation strategy. That is, a detached index does not automatically adjust to accommodate the new fragmentation of the surviving table.
In a database that supports transaction logging, an ATTACH operation extends any attached index on the surviving table according to the new fragmentation strategy of the surviving table. All rows in the consumed table are subject to these automatically adjusted indexes.
In a nonlogging database of IBM Informix, an ATTACH operation does not extend indexes on the surviving table according to the new fragmentation strategy of the surviving table. To extend the fragmentation strategy of an attached index according to the new fragmentation strategy of the surviving table, you must drop the index and re-create it on the surviving table.
Some ALTER FRAGMENT ... ATTACH operations can cause the database server to update the index structure. When an index is rebuilt in those cases, the associated column distribution is automatically recalculated, and it is available to the query optimizer when it designs query plans for the table on which the fragment was attached.
For an indexed column (or a set of columns) on which ALTER FRAGMENT ... ATTACH automatically rebuilds a B-tree index, the recalculated column distribution statistics are equivalent to distributions created by the UPDATE STATISTICS statement in HIGH mode.
If the rebuilt index is not a B-tree index, the automatically recalculated statistics correspond to distributions created by the UPDATE STATISTICS statement in LOW mode.
Requirements for ONLINE ATTACH operations
You can use the ATTACH option on the ALTER FRAGMENT ONLINE ON TABLE statement only if the surviving table is fragmented by an interval fragmentation scheme. The consumed table must be nonfragmented.
All indexes on the surviving table must have the same fragmentation scheme as the table. (That is, any indexes must be attached.) For this reason, if there is a primary key constraint or other referential constraints on the table, it is recommended that you first create an attached index for the constraint, and then use the ALTER TABLE statement to add the constraint. (By default, system-created indexes for primary key constraints and for other referential constraints are detached.)
For each index on the surviving table, there must be a matching index on the same set of columns of the consumed table. The matching indexes on the consumed table will be recycled as index fragments on the surviving table during the ATTACH operation. Any additional indexes on the consumed table are dropped during the ATTACH operation. The indexes on the consumed table that will be recycled must each be detached in a single dbspace, and the dbspace that stores the recycled index must be the same dbspace that stores the consumed table.
If the index on the surviving table is unique, the corresponding matching index on the consumed table must also be unique.
The consumed table must have a check constraint that satisfies both of the following conditions:
- It must exactly match the expression for the fragment that is being attached.
- It must span a single interval only.
The requirement that rows in the consumed table span only a single interval within the range interval fragmentation scheme of the surviving table is necessary to prevent data movement. Data movement is not allowed in ALTER FRAGMENT ATTACH operations that include the ONLINE keyword.
Only one consumed table can be specified in the ONLINE ATTACH operation.
The SQL statements in Listing 50 define a fragmented employee table that uses a range-interval storage distribution scheme with a unique index employee_id_idx on the column emp_id (that is also the fragmentation key) and another index employee_dept_idx on the column dept_id.
Listing 50. Example of ALTER FRAGMENT ONLINE ATTACH
CREATE TABLE employee
   (emp_id INTEGER, name CHAR(32), dept_id CHAR(2),
    mgr_id INTEGER, ssn CHAR(12))
   FRAGMENT BY RANGE (emp_id) INTERVAL (100)
      STORE IN (dbs1, dbs2, dbs3, dbs4)
      PARTITION p0 VALUES < 200 IN dbs1,
      PARTITION p1 VALUES < 400 IN dbs2;

CREATE UNIQUE INDEX employee_id_idx ON employee(emp_id);
CREATE INDEX employee_dept_idx ON employee(dept_id);
Two subsequent INSERT statements (not shown in Listing 50) add rows with fragment key values of 401 and 601, above the upper limit of the transition fragment. These inserts cause the database server to generate two new interval fragments, so that the resulting fragment list consists of the fragments shown in Listing 51.
Listing 51. Fragments in surviving table before ALTER FRAGMENT ONLINE
p0     VALUES < 200                    - range fragment
p1     VALUES < 400                    - range fragment (transition fragment)
sys_p2 VALUES >= 400 AND VALUES < 500  - interval fragment
sys_p4 VALUES >= 600 AND VALUES < 700  - interval fragment
The SQL statements in Listing 52 define a nonfragmented employee2 table with the same column schema as the employee table and with single-column indexes on the two corresponding columns (emp_id and dept_id) that were indexed in the employee table. The statements also define a unique index employee2_ssn_idx on the column ssn and another index employee2_name_idx on the column name. All four of these indexes are stored in the dbspace dbs4. The CREATE TABLE statement also specifies a check constraint (emp_id >= 500 AND emp_id < 600) that exactly matches the fragment expression for the consumed table that will be attached and that spans exactly one interval of the range-interval fragmentation scheme for the employee table.
Listing 52. Example with same column schema and with single-column indexes
CREATE TABLE employee2
   (emp_id INTEGER, name CHAR(32), dept_id CHAR(2),
    mgr_id INTEGER, ssn CHAR(12),
    CHECK (emp_id >= 500 AND emp_id < 600)) IN dbs4;

CREATE UNIQUE INDEX employee2_id_idx ON employee2(emp_id) IN dbs4;
CREATE INDEX employee2_dept_idx ON employee2(dept_id) IN dbs4;
CREATE UNIQUE INDEX employee2_ssn_idx ON employee2(ssn) IN dbs4;
CREATE INDEX employee2_name_idx ON employee2(name) IN dbs4;
The statement in Listing 53 returns an error because the fragment being attached is a range fragment (a fragment that stores rows with fragmentation key values below the transition value of 400 for the employee table). Only interval fragments can be attached online.
Listing 53. Using a range fragment
ALTER FRAGMENT ONLINE ON TABLE employee ATTACH employee2 AS PARTITION p3 VALUES < 300;
The statement in Listing 54 runs successfully and creates a new p3 interval fragment.
Listing 54. Create a p3 interval
ALTER FRAGMENT ONLINE ON TABLE employee ATTACH employee2 AS PARTITION p3 VALUES < 600;
Listing 55 shows the results after the statement is run.
Listing 55. Fragments in surviving table after ALTER FRAGMENT ONLINE
p0     VALUES < 200                    - range fragment
p1     VALUES < 400                    - range fragment
sys_p2 VALUES >= 400 AND VALUES < 500  - interval fragment
sys_p3 VALUES >= 500 AND VALUES < 600  - interval fragment
sys_p4 VALUES >= 600 AND VALUES < 700  - interval fragment
Note that the successful ALTER FRAGMENT ONLINE ... ATTACH operation in Listing 54 required multiple correspondences among specifications in the DDL statements that defined the surviving and consumed tables, including their columns, indexes, constraints, index storage location, and the interval fragmentation strategy of the surviving table. The correspondences are as follows:
- The check constraint on the consumed table spans a single interval only. The interval value is 100 for the surviving table, and the check constraint is >= 500 and < 600.
- The conditional expression being attached (< 600) is internally converted to the interval fragment expression format (>= 500 and < 600), which matches the check constraint.
- The indexes on the surviving table are attached (that is, they are fragmented by the same fragmentation scheme as the table), because no fragmentation strategy was specified explicitly in their CREATE INDEX statements.
- The indexes on the consumed table are detached in a single dbspace (dbs4), which is the same dbspace that stored the consumed table.
- For each index on the surviving table, there is a matching index on the consumed table.
- The extra indexes on the consumed table (employee2_ssn_idx and employee2_name_idx) that do not correspond to indexes on the surviving employee table are dropped during the ONLINE ATTACH operation.
Using ALTER FRAGMENT DETACH
You can use the ALTER FRAGMENT DETACH statement to detach a table fragment from a distribution scheme and place the contents into a new nonfragmented table. It provides a way to delete a segment of the table data rapidly. The DETACH clause is not valid in ALTER FRAGMENT ON INDEX statements.
The new table that results from executing the DETACH clause does not inherit any indexes or constraints from the original table. Only data values remain.
Similarly, the new table does not inherit any privileges from the original table. Instead, the new table has the default privileges of any new table.
The DETACH clause cannot be applied to a table if that table is the parent of a referential constraint or if a rowid column is defined on the table.
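As a sketch of the basic (non-ONLINE) form, the following hypothetical statement detaches the fragment stored in dbspace dbs3 from a table named sales into a new nonfragmented table sales_old; the table, fragment, and dbspace names are placeholders.

```sql
-- Detach the fragment in dbspace dbs3 from table "sales" into a new,
-- nonfragmented table "sales_old" (hypothetical names).
-- The new table inherits no indexes, constraints, or privileges.
ALTER FRAGMENT ON TABLE sales DETACH dbs3 sales_old;
```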
Distribution statistics after DETACH operations
Some ALTER FRAGMENT ... DETACH operations can cause the database server to update the index structure. When an index is rebuilt in those cases, the database server also recalculates the associated column distributions, and these statistics are available to the query optimizer when it designs query plans for the table from which the fragment was detached. The statistics are as follows:
- For an indexed column (or for a set of columns) on which ALTER FRAGMENT ... DETACH automatically rebuilds a B-tree index, the recalculated column distribution statistics are equivalent to distributions created by the UPDATE STATISTICS statement in HIGH mode.
- If the rebuilt index is not a B-tree index, the automatically recalculated statistics correspond to distributions created by the UPDATE STATISTICS statement in LOW mode.
If the automatic mode for updating distribution statistics is enabled, and the table being detached from has fragmented distribution statistics, the database server merges the statistics of the detached fragment to form the distribution statistics of the new table. The database server also merges the data distribution statistics of the remaining fragments to form the distribution statistics of the surviving table and stores the results in the system catalog. This recalculation of fragment statistics runs in the background.
Using the ONLINE keyword in DETACH operations
The ONLINE keyword instructs the database server to commit the ALTER FRAGMENT ... DETACH work internally, if there are no errors, and to apply an intent exclusive lock to the table from which the fragment was detached, rather than an exclusive lock. An exclusive lock is applied to the table that is created from the detached fragment.
You can use the DETACH option to the ALTER FRAGMENT ONLINE ON TABLE statement only for a table that uses a range interval fragmentation scheme.
A table that uses a range interval storage distribution scheme can have either of two types of fragments:
- Range fragments, which are defined by the user in the FRAGMENT BY or PARTITION BY clause of the CREATE TABLE or ALTER TABLE statement
- Interval fragments, which are generated automatically by the database server during INSERT and UPDATE operations if a row has fragment key values above the upper limit of the transition fragment (the last range fragment).
Only an interval fragment can be detached in an ONLINE DETACH operation.
If the detached interval fragment is not the last fragment, the database server modifies the fragment names of any system-generated interval fragments that follow the detached fragment in the fragment list to match their new sysfragments.evalpos values in the surviving table. During this fragment renaming operation, an exclusive lock is placed on the fragments while the sysfragments system catalog is being updated with the new partition names (and with new evalpos values for any fragments whose ordinal positions within the fragment list changed during the ALTER FRAGMENT DETACH operation).
All indexes on the surviving table must have the same fragmentation scheme as the table. (That is, any indexes must be attached.) For this reason, if there is a primary key constraint or other referential constraints on the table, it is recommended that you first create an attached index for the constraint, and then use the ALTER TABLE statement to add the constraint. (By default, system-created indexes for primary key constraints and for other referential constraints are detached.)
If there are sessions accessing the same partition that is being detached, it is recommended that you issue the SET LOCK MODE TO WAIT statement for enough seconds to prevent nonexclusive access errors.
The SQL statements in Listing 56 define a fragmented employee table that uses a range-interval storage distribution scheme with a unique index employee_id_idx on the column emp_id (that is also the fragmentation key) and another index employee_dept_idx on the column dept_id.
Listing 56. Example of ALTER FRAGMENT ONLINE ... DETACH
CREATE TABLE employee
   (emp_id INTEGER, name CHAR(32), dept_id CHAR(2),
    mgr_id INTEGER, ssn CHAR(12))
   FRAGMENT BY RANGE (emp_id) INTERVAL (100)
      STORE IN (dbs1, dbs2, dbs3, dbs4)
      PARTITION p0 VALUES < 200 IN dbs1,
      PARTITION p1 VALUES < 400 IN dbs2;

CREATE UNIQUE INDEX employee_id_idx ON employee(emp_id);
CREATE INDEX employee_dept_idx ON employee(dept_id);
The statements in Listing 57 insert rows with fragment key values above the upper limit of the transition fragment.
Listing 57. Values for the DETACH statement
INSERT INTO employee VALUES (401, "Susan", "DV", 101, "123-45-6789");
INSERT INTO employee VALUES (601, "David", "QA", 104, "987-65-4321");
The inserted rows cause the database server to generate two new interval fragments, so that the resulting fragment list consists of the four fragments shown in Listing 58.
Listing 58. Fragments in surviving table before ALTER FRAGMENT ONLINE
p0     VALUES < 200                    - range fragment
p1     VALUES < 400                    - range fragment (transition fragment)
sys_p2 VALUES >= 400 AND VALUES < 500  - interval fragment
sys_p4 VALUES >= 600 AND VALUES < 700  - interval fragment
The statement in Listing 59 returns an error, because the specified fragment to be detached is a range fragment (a fragment that stores rows with fragmentation key values below the transition value of 400). Only interval fragments can be detached online.
Listing 59. Range fragment causing an error
ALTER FRAGMENT ONLINE ON TABLE employee DETACH PARTITION p0 employee3;
The statement in Listing 60 runs successfully and creates a new employee3 table to store the data in the detached fragment.
Listing 60. Successful interval fragment DETACH
ALTER FRAGMENT ONLINE ON TABLE employee DETACH PARTITION sys_p2 employee3;
If there are concurrent sessions accessing sys_p2, set the lock mode to WAIT (for a number of seconds sufficient for the ONLINE DETACH operation to be committed) to prevent nonexclusive access errors, as shown in Listing 61.
Listing 61. Setting the lock mode
SET LOCK MODE TO WAIT 300;
ALTER FRAGMENT ONLINE ON TABLE employee DETACH PARTITION sys_p2 employee3;
Listing 62 shows the resulting fragments.
Listing 62. Fragments in surviving table after ALTER FRAGMENT ONLINE
p0     VALUES < 200                    - range fragment
p1     VALUES < 400                    - range fragment
sys_p4 VALUES >= 600 AND VALUES < 700  - interval fragment
Forcing out transactions when altering table fragments
You can enable the server to force out transactions that have opened the table or hold locks on it when you issue an ALTER FRAGMENT ON TABLE operation in a logging database. You might want to do this on a busy system, perhaps one that runs 24 hours a day, when you do not want to wait for sessions to close before you alter a fragment.
Be aware that if you enable the server to force out transactions, the server rolls back other users' transactions. The server also closes hold cursors during the rollback performed on behalf of the session that runs the ALTER FRAGMENT ON TABLE operation.
Note that you must be user informix or have DBA privileges on the database, and the table must be in a logging database.
To force out transactions when altering a table fragment, complete the following steps.
- Set the FORCE_DDL_EXEC environment option of the SET ENVIRONMENT statement to one of the following values:
  - 1, to enable the server to force out transactions that are open or have a lock on the table when an ALTER FRAGMENT ON TABLE statement is issued, until the server gets exclusive access and exclusive locks on the table.
  - A positive integer that represents an amount of time in seconds. The numeric value enables the server to force out transactions until the server gets exclusive access and exclusive locks on the table, or until the specified time limit is reached. If the server cannot force out transactions within the specified amount of time, it stops attempting to force them out. For example:
    SET ENVIRONMENT FORCE_DDL_EXEC '100';
- Set the lock mode to wait to ensure that the server waits a specified amount of time before forcing out any transactions. For example, to set the lock mode to wait for 20 seconds:
    SET LOCK MODE TO WAIT 20;
- Run an ALTER FRAGMENT ON TABLE statement to attach, detach, modify, add, or drop the fragment.
After you complete an ALTER FRAGMENT ON TABLE operation with the FORCE_DDL_EXEC environment option enabled, you can turn the FORCE_DDL_EXEC environment option off. For example:
    SET ENVIRONMENT FORCE_DDL_EXEC 'OFF';
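Putting the steps together, a complete session might look like the following sketch; the table name customer, the detached dbspace and table names, and the time limits are placeholders.

```sql
SET ENVIRONMENT FORCE_DDL_EXEC '100';  -- force out blocking transactions for up to 100 seconds
SET LOCK MODE TO WAIT 20;              -- wait up to 20 seconds for locks before forcing

-- The fragment operation itself (a DETACH in this hypothetical example):
ALTER FRAGMENT ON TABLE customer DETACH dbs3 customer_old;

SET ENVIRONMENT FORCE_DDL_EXEC 'OFF';  -- turn the option off again
```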
Defining a strategy for fragmenting indexes
When you fragment a table, the indexes that are associated with that table are fragmented implicitly, according to the fragmentation scheme that you use. You can also use the FRAGMENT BY EXPRESSION clause of the CREATE INDEX statement to fragment the index for any table explicitly.
Each index of a fragmented table occupies its own tblspace with its own extents. You can fragment the index using either the same fragmentation strategy as the table or using a different fragmentation strategy from the table.
Using an attached index
An attached index is an index that implicitly follows the table fragmentation strategy (distribution scheme and set of dbspaces in which the fragments are located). The database server automatically creates an attached index when you first fragment a table.
To create an attached index, do not specify a fragmentation strategy or storage option in the CREATE INDEX statement, as shown in Listing 63.
Listing 63. Creating an attached index
CREATE TABLE tb1 (a INT)
   FRAGMENT BY EXPRESSION
      (a >= 0 AND a < 5) IN dbspace1,
      (a >= 5 AND a < 10) IN dbspace2
      ...;
...
CREATE INDEX idx1 ON tb1(a);
For fragmented tables that use the expression-based or round-robin distribution schemes, you can also create multiple partitions of a table or index within a single dbspace. This enables you to reduce the number of required dbspaces, thereby simplifying the management of dbspaces.
To create an attached index with partitions, include the partition name in your SQL statements, as shown in Listing 64.
Listing 64. Include the partition name
CREATE TABLE tb1 (a INT)
   FRAGMENT BY EXPRESSION
      PARTITION part1 (a >= 0 AND a < 5) IN dbs1,
      PARTITION part2 (a >= 5 AND a < 10) IN dbs1
      ...;
...
CREATE INDEX idx1 ON tb1(a);
You can use PARTITION BY EXPRESSION instead of FRAGMENT BY EXPRESSION in CREATE TABLE, CREATE INDEX, and ALTER FRAGMENT ON INDEX statements, as shown in Listing 65.
Listing 65. Using PARTITION BY EXPRESSION
ALTER FRAGMENT ON INDEX idx1 INIT
   PARTITION BY EXPRESSION
      PARTITION part1 (a <= 10) IN dbs1,
      PARTITION part2 (a <= 20) IN dbs1,
      PARTITION part3 (a <= 30) IN dbs1;
Use ALTER FRAGMENT syntax to change fragmented indexes that do not have partitions into indexes that have partitions. The syntax in Listing 66 shows how you might convert a fragmented index into an index that contains partitions.
Listing 66. Using ALTER FRAGMENT syntax
CREATE TABLE t1 (c1 INT)
   FRAGMENT BY EXPRESSION
      (c1=10) IN dbs1,
      (c1=20) IN dbs2,
      (c1=30) IN dbs3;

CREATE INDEX ind1 ON t1 (c1)
   FRAGMENT BY EXPRESSION
      (c1=10) IN dbs1,
      (c1=20) IN dbs2,
      (c1=30) IN dbs3;

ALTER FRAGMENT ON INDEX ind1 INIT
   FRAGMENT BY EXPRESSION
      PARTITION part_1 (c1=10) IN dbs1,
      PARTITION part_2 (c1=20) IN dbs1,
      PARTITION part_3 (c1=30) IN dbs1;
Creating a table or index containing partitions improves performance by enabling the database server to search more quickly and by reducing the required number of dbspaces. The database server fragments the attached index according to the same distribution scheme as the table by using the same rule for index keys as for table data. As a result, attached indexes have the following physical characteristics:
- The number of index fragments is the same as the number of data fragments.
- Each attached index fragment resides in the same dbspace as the corresponding table data, but in a separate tblspace.
- An attached index or an index on a nonfragmented table uses 4 bytes for the row pointer for each index entry.
IBM Informix does not support forest of trees attached indexes.
Setting up detached indexes
A detached index is an index with a separate fragmentation strategy that you set up explicitly with the CREATE INDEX statement. The sample SQL statements in Listing 67 and Listing 68 create a detached index.
Listing 67. Creating a detached index
CREATE TABLE tb1 (a INT)
   FRAGMENT BY EXPRESSION
      (a <= 10) IN tabdbspc1,
      (a <= 20) IN tabdbspc2,
      (a <= 30) IN tabdbspc3;
Listing 68. Creating another detached index
CREATE INDEX idx1 ON tb1 (a)
   FRAGMENT BY EXPRESSION
      (a <= 10) IN idxdbspc1,
      (a <= 20) IN idxdbspc2,
      (a <= 30) IN idxdbspc3;
This example illustrates a common fragmentation strategy: fragment indexes in the same way as the tables, but specify different dbspaces for the index fragments. This fragmentation strategy of putting the index fragments in different dbspaces from the table can improve the performance of operations such as backup, recovery, and so forth.
By default, all new indexes that the CREATE INDEX statement creates are detached and stored in separate tablespaces from the data, unless you specify the deprecated IN TABLE syntax.
To create a detached index with partitions, include the partition name in your SQL statements, as shown in Listing 69 and Listing 70.
Listing 69. Creating a detached index with partitions
CREATE TABLE tb1 (a INT)
   FRAGMENT BY EXPRESSION
      PARTITION part1 (a <= 10) IN dbs1,
      PARTITION part2 (a <= 20) IN dbs2,
      PARTITION part3 (a <= 30) IN dbs3;
Listing 70. Creating a detached index with partitions
CREATE INDEX idx1 ON tb1 (a)
   FRAGMENT BY EXPRESSION
      PARTITION part1 (a <= 10) IN dbs1,
      PARTITION part2 (a <= 20) IN dbs2,
      PARTITION part3 (a <= 30) IN dbs3;
You can use PARTITION BY EXPRESSION instead of FRAGMENT BY EXPRESSION in CREATE TABLE, CREATE INDEX, and ALTER FRAGMENT ON INDEX statements.
If you do not want to fragment the index, you can put the entire index in a separate dbspace.
You can fragment the index for any table by expression. However, you cannot explicitly create a round-robin fragmentation scheme for an index. Whenever you fragment a table using a round-robin fragmentation scheme, convert all indexes that accompany the table to detached indexes for the best performance.
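For example, a round-robin table might be paired with a detached index that is fragmented by expression, as in the following sketch; the table, index, and dbspace names are placeholders.

```sql
-- A round-robin table; its index cannot be fragmented by round-robin,
-- so it is created as a detached index fragmented by expression.
CREATE TABLE orders (order_num INT, amount MONEY(8,2))
   FRAGMENT BY ROUND ROBIN IN dbs1, dbs2;

CREATE INDEX orders_idx ON orders (order_num)
   FRAGMENT BY EXPRESSION
      (order_num < 5000) IN idxdbs1,
      (order_num >= 5000) IN idxdbs2;
```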
Each detached index fragment resides in a different tblspace from the corresponding table data. Therefore, the data and index pages cannot be interleaved within the tblspace.
Detached index fragments have their own extents and tblspace IDs. The tblspace ID is also known as the fragment ID and partition number. A detached index uses 8 bytes of disk space per index entry for the fragment ID and row pointer combination.
The database server stores the location of each table and index fragment, along with other related information, in the system catalog table sysfragments. You can use the sysfragments system catalog table to access the following information about fragmented tables and indexes:
- The value in the partn field is the partition number or fragment ID of the table or index fragment. The partition number for a detached index is different from the partition number of the corresponding table fragment.
- The value in the strategy field is the distribution scheme used in the fragmentation strategy.
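For example, a query along the following lines (the table name tb1 is a placeholder) lists each fragment of a table with its fragment type, partition number, distribution scheme, position in the fragment list, and dbspace:

```sql
-- List table and index fragments for table "tb1" from the system catalog.
SELECT f.fragtype, f.partn, f.strategy, f.evalpos, f.dbspace
   FROM sysfragments f, systables t
   WHERE f.tabid = t.tabid
     AND t.tabname = 'tb1';
```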
Understanding restrictions on indexes for fragmented tables
If the database server scans a fragmented index, multiple index fragments must be scanned, and the results must be merged together. (The exception is if the index is fragmented according to some index-key range rule, and the scan does not cross a fragment boundary.) Because of this requirement, performance on index scans might suffer if the index is fragmented.
Because of these performance considerations, the database server places the following restrictions on indexes:
- You cannot fragment indexes by round-robin.
- You cannot fragment unique indexes by an expression that contains columns that are not in the index key.
For example, the statement in Listing 71 is not valid.
Listing 71. Invalid SQL statement
CREATE UNIQUE INDEX ia ON tab1(col1)
   FRAGMENT BY EXPRESSION
      col2 < 10 IN dbsp1,
      col2 >= 10 AND col2 < 100 IN dbsp2,
      col2 > 100 IN dbs
Optimizing data storage
You can use data compression and consolidation methods to minimize the disk space used by your data and improve the I/O performance of IBM Informix. Table 3 shows when to use each method.
Table 3. Methods to reduce the amount of disk space used by data
| Storage optimization method | Purpose | When to use |
| --- | --- | --- |
| Compressing data | Compresses data in table or fragment rows, reducing the amount of required disk space. After you compress data, you can also consolidate the free space that remains in the table or fragment, and return the free space to the dbspace. | When you want to reduce the size of data in a table |
| Repacking data | Consolidates free space in tables and fragments | After you compress data, or separately when you want to consolidate free space in the table or fragment |
| Shrinking data | Returns free space to the dbspace | After you compress or repack data and want to return free space to the dbspace |
| Defragmenting table extents | Brings data rows closer together in contiguous, merged extents | When frequently updated tables become scattered among multiple non-contiguous extents |
Compression is the capability of the server to store data rows in compressed format on disk. Compressing data has the following benefits:
- Significant savings in disk storage space
- Reduced disk usage for compressed fragments
- Significant saving of logical log usage, which saves additional space and can prevent bottlenecks for high-throughput OLTP after the compression operation is completed
- Smaller buffer pools, because more data fits in the same size pool
- Reduced I/O activity for the following reasons:
- More compressed rows than uncompressed rows fit on a page.
- Log records for insert, update, and delete operations of compressed rows are smaller.
- Older fragments of time-fragmented data that are not often accessed are compressed, while more recent data that is frequently accessed is left in uncompressed form.
- Space no longer required for a table is freed.
- Backup and restore is performed faster.
I/O-bound tables (for example, those with bad cache hit ratios) are good candidates for compression. In OLTP environments, compressing I/O-bound tables can improve performance.
If your applications run with high buffer cache hit ratios and if high performance is more important than space usage, you might not want to compress your data, because compression might slightly decrease performance.
Because compressed data covers fewer pages and has more rows per page than uncompressed data, the query optimizer might choose different plans after compression.
If you use enterprise replication (ER), compressing data on one replication server does not affect the data on any other replication server.
You cannot use the onload and onunload utilities to move compressed data from one database to another. You must uncompress data in compressed tables and fragments before using the onload and onunload utilities.
Perform compression-related operations by running SQL statements that contain SQL administration API commands with compression parameters. Compression-related operations include enable compression, estimate_compression, create_dictionary, compress, repack, repack_offline, shrink, uncompress, and uncompress_offline operations.
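For example, assuming a table named customer in the database stores_demo owned by user informix (all placeholder names), compression operations run through the task() function of the sysadmin database might look like the following sketch:

```sql
-- Estimate the compression ratio without changing the table.
EXECUTE FUNCTION task('table estimate_compression', 'customer',
                      'stores_demo', 'informix');

-- Compress the rows, consolidate the remaining free space, and
-- return the freed space to the dbspace in one operation.
EXECUTE FUNCTION task('table compress repack shrink', 'customer',
                      'stores_demo', 'informix');

-- Reverse the operation if needed.
EXECUTE FUNCTION task('table uncompress', 'customer',
                      'stores_demo', 'informix');
```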
Compression operations such as compress, uncompress, and uncompress_offline can consume large amounts of log files. Configure your logs to be larger if any workload that you expect to run, including but not limited to these compression operations, will consume log files faster than one every 30 seconds.
Do not drop a dbspace that the Change Data Capture (CDC) API is using if the dbspace ever contained compressed tables, because doing so might delete compression dictionaries that CDC still requires.
The main alternative to compression is to buy more physical storage. The main alternative for reducing bottlenecks in I/O-bound workloads is to buy more physical memory to enable the expansion of the buffer pools.
Learning which data you can compress
Table or table-fragment data with frequently repeating long patterns is highly compressible. Certain types of data, such as text, might compress better than other types, such as numeric data, because text tends to contain longer and more frequently repeating patterns.
However, you cannot predict a compression ratio based only on the type of data. The following circumstances also affect the compression ratio:
- Text in different languages or character sets might have different compression ratios, even though the text is stored in CHAR or VARCHAR columns.
- Numeric data that consists mostly of zeroes might compress well, while more variable numeric data might not compress well.
- Data with long runs of blank spaces compresses well.
- Data that has already been compressed using some other algorithm or data that has been encrypted might not compress well.
IBM Informix can compress any combination of data types, because it treats all data to be compressed as unstructured sequences of bytes. Thus, the server can compress patterns that span columns, for example, in city, state, and zip code combinations. (IBM Informix uncompresses a sequence of bytes in the same sequence that existed before the data was compressed.)
There are no restrictions on the types of data that can be compressed.
Compression is applied only to the contents of data rows, including any remainder pieces for rows that span pages, and to the images of those rows that are contained in logical log records.
Many types of large-object data (such as images and sound samples) in rows might already be compressed, so compressing the data again would not achieve any additional saving of space.
Learning which data you cannot compress
You cannot compress data in indexes, and you cannot compress data in some types of tables and fragments.
You cannot compress data in rows in the following places:
- Tables or fragments in the sysmaster, sysutils, sysuser, syscdr, and syscdcv1 databases
- Temporary tables
- Virtual-table interface tables
- A tblspace tblspace. These are hidden fragments, one per dbspace. Each one holds metadata about all of the fragments in the dbspace.
- Internal partition tables
- Dictionary tables, one per dbspace. These tables hold compression dictionaries for the fragments or tables that are compressed in that dbspace and metadata about the dictionaries.
- A table if an online index build is occurring on the table
Compression is not applied to index data, LOB data that is stored outside of the row, or any other form of non-row data.
Encrypted data, data that is already compressed by another algorithm, and data without long repeating patterns compresses poorly or does not compress. Try to avoid placing columns with data that compresses poorly between columns that have frequent patterns to prevent the potential disruption of column-spanning patterns that can be compressed.
If XML data is stored with the first portion in a row and the remainder outside of the row, compression is only applied to the portion that is stored in the row.
IBM Informix compresses images of the rows only if the images of the compressed rows will be smaller than the uncompressed images. Even if compressed rows are only slightly smaller than their uncompressed images, a small saving of space can enable the server to put more rows onto pages.
Understanding compression ratios
The compression ratio depends on the data being compressed. The compression algorithm that IBM Informix uses is a dictionary-based algorithm that performs operations on the patterns of the data that were found to be the most frequent, weighted by length, in the data that was sampled at the time the dictionary was built.
If the typical data distribution skews away from the data that was sampled when the dictionary was created, compression ratios can decrease.
The maximum possible compression ratio is 90 percent. This is because the maximum possible compression of any sequence of bytes occurs by replacing each group of 15 bytes with a single 12-bit symbol number, yielding a compressed image that is 10 percent of the size of the original image. However, the 90 percent ratio is never quite achieved, because IBM Informix adds a single byte of metadata to each compressed image.
Before you compress a table or table fragment, IBM Informix can estimate the amount of space that compression will save. The displayed ratios are estimates based on samples of row data; the actual ratio of saved space might vary slightly.
IBM Informix estimates the compression ratios by random sampling of row data (using the same sampling algorithm as dictionary building) and then summing up the sizes of the following items:
- Uncompressed row images
- Compressed row images using a new compression dictionary, which is temporarily created by the estimate compression command
- Compressed row images using the existing dictionary if there is one. If there is no existing dictionary, this value is the same as the sum of sizes of uncompressed row images.
The actual space-saving ratios achieved might vary because of a small sampling error or because the estimates are based on raw compressibility of the rows. For example, the server generally tries to put the entire row onto a single page. So, if each uncompressed row nearly fills a complete page and the compression ratio is less than 50 percent, the compressed rows will still fill more than half a page each, and the server will tend to put each row on a separate page even after compression. In this case, although the estimated compression ratio might be something like 45 percent, the actual space savings might turn out to be 0 percent.
The opposite can also occur. Suppose each uncompressed row fills slightly more than half a page, so each row consumes a full page because two full rows do not fit. The estimated compression ratio might be only about 5 percent, but that might be just enough to shrink the rows to less than half a page each. After compression, two rows fit on a page, and the true space savings might be 50 percent.
The actual compression achieved might also vary from the estimate because IBM Informix can never store more than 255 rows on a single page. Thus, small rows or large pages can reduce the total savings that compression can achieve. For example, if 200 rows fit onto a page before compression, no matter how small the rows are when compressed, the maximum effective compression ratio is approximately 20 percent, because only 255 rows can fit on a page after compression.
If you are using a page size that is larger than the minimum page size, one way to increase the realized compression space savings is to switch to smaller pages, so that:
- The 255-row limit can no longer be reached, or
- If the limit is still reached, less space is left unused on each page.
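The page-capacity effects described above can be sketched numerically. The model below is illustrative only; the per-page overhead constant and row counts are assumptions, not Informix internals:

```python
import math

MAX_ROWS_PER_PAGE = 255      # hard server limit on rows per page
PAGE_OVERHEAD = 28           # assumed per-page header size (illustrative)

def rows_per_page(row_size, page_size):
    """How many rows fit on one page, given the 255-row cap."""
    usable = page_size - PAGE_OVERHEAD
    return max(1, min(MAX_ROWS_PER_PAGE, usable // row_size))

def realized_savings(row_size, est_ratio, page_size, n_rows=100_000):
    """Actual space savings once rows are packed onto whole pages."""
    pages_before = math.ceil(n_rows / rows_per_page(row_size, page_size))
    compressed = math.ceil(row_size * (1 - est_ratio))
    pages_after = math.ceil(n_rows / rows_per_page(compressed, page_size))
    return 1 - pages_after / pages_before

# A row that nearly fills a 2 KB page, estimated 45% ratio: no real savings.
print(realized_savings(1900, 0.45, 2048))   # 0.0
# A row just over half a page, estimated 5% ratio: 50% real savings.
print(realized_savings(1050, 0.05, 2048))   # 0.5
# Tiny rows: the 255-row cap limits savings to roughly 20%.
print(round(realized_savings(10, 0.90, 2048), 2))   # ~0.21
```

The first two calls reproduce the two worked examples in the text; the third shows the 255-row cap dominating no matter how compressible the rows are.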
More (or less) space can be saved, compared to the estimate, if the compress operation is combined with a repack operation, a shrink operation, or a repack-and-shrink operation. The repack operation can save additional space only if more compressed rows fit on a page than uncompressed rows. The shrink operation can save space at the dbspace level if the repack operation frees space.
Exploring compression dictionaries
A separate compression dictionary exists for each compressed fragment and for each compressed non-fragmented table. Each compression dictionary is a library of frequently occurring patterns in the fragment or table data and the symbol numbers that replace the patterns.
A compression dictionary is built using data that is sampled randomly from a fragment or non-fragmented table that contains at least 2,000 rows. If the fragment or table does not contain 2,000 rows, IBM Informix will not build a compression dictionary.
The compression dictionary can store a maximum of 3,840 patterns, each of which can be from 2 to 15 bytes in length. (Patterns that are longer than seven bytes reduce the total number of patterns that the dictionary can hold.) Each of these patterns is represented by a 12-bit symbol number in a compressed row. To be compressed, a sequence of bytes in the input row image must exactly match a complete pattern in the dictionary. A row that does not have enough pattern matches against the dictionary might not be compressible, because each byte of an input row that did not completely match is replaced in the compressed image by 12 bits (1.5 bytes).
IBM Informix attempts to capture the best compressible patterns (the frequency of the pattern multiplied by the length). Data is compressed by replacing occurrences of the patterns with the corresponding symbol numbers from the dictionary, and replacing occurrences of bytes that do not match any pattern with special reserved symbol numbers.
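Because every matched pattern and every unmatched byte is emitted as a 12-bit (1.5-byte) symbol, a row's compressed size is easy to model. A rough sketch, with a made-up row and match set for illustration:

```python
def compressed_size_bytes(matched_pattern_lengths, row_len):
    """Model of the symbol encoding: each matched dictionary pattern
    and each unmatched input byte becomes one 12-bit (1.5-byte) symbol."""
    matched_bytes = sum(matched_pattern_lengths)
    unmatched_bytes = row_len - matched_bytes
    symbols = len(matched_pattern_lengths) + unmatched_bytes
    return symbols * 1.5

# A 100-byte row with no dictionary matches actually grows: 150 bytes.
print(compressed_size_bytes([], 100))          # 150.0
# The same row fully covered by ten 10-byte patterns shrinks to 15 bytes.
print(compressed_size_bytes([10] * 10, 100))   # 15.0
```

This shows why a row with few pattern matches may be incompressible: the 1.5-byte symbols for unmatched bytes outweigh the savings from the matches.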
All dictionaries for the tables or fragments in a dbspace are stored in a hidden dictionary table in that dbspace. The syscompdicts_full table and the syscompdicts view in the sysmaster database provide information about the compression dictionaries.
Typically, approximately 100 KB of space is required for storing the compression dictionary for a compressed fragment or table. Thus, very small tables are not good candidates for compression, because you might not be able to gain back enough space from compressing the rows to offset the storage cost of the compression dictionary.
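Since each dictionary costs roughly 100 KB of storage, you can estimate whether compression pays off for a given table. A back-of-the-envelope sketch; the row counts, row size, and ratio below are assumptions for illustration:

```python
DICT_OVERHEAD = 100 * 1024   # approximate dictionary storage cost, in bytes

def net_saving_bytes(n_rows, avg_row_size, compression_ratio):
    """Space saved by compressing rows, minus the dictionary cost."""
    return int(n_rows * avg_row_size * compression_ratio) - DICT_OVERHEAD

# 2,000 rows of 50 bytes at a 50% ratio: the dictionary costs more than it saves.
print(net_saving_bytes(2_000, 50, 0.5))            # -52400
# A million such rows: compression is clearly worthwhile.
print(net_saving_bytes(1_000_000, 50, 0.5) > 0)    # True
```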
Additionally, IBM Informix never compresses an individual row to fewer than 4 bytes, because the server must leave room in case the row image later grows beyond what the page can hold. Therefore, do not try to compress fragments or non-fragmented tables whose rows are four bytes long or shorter.
Using the API interface to manage compression
All compression operations are executed using an SQL administration API admin() or task() command. The compress operation creates a compression dictionary if one does not exist, and it compresses the rows without moving them.
Following are the prerequisites to use the API commands:
- There must be at least 2,000 rows on a table or in each fragment of the table, not just a total of 2,000 rows in the table as a whole.
- You must be able to connect to the sysadmin database (by default, only user informix can), and you must be a database system administrator.
- You must configure your logs to be larger than they currently are if any workload that you expect to run, including but not limited to a compress operation, will consume log files faster than one every 30 seconds.
For a table, the table name is mandatory. The database and owner names are optional. If you do not specify a database or owner name, IBM Informix uses the current database and owner name.
For a fragment, the partnum_list is a space-separated list of partition numbers that belong to the same table.
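The naming rules above can be captured in a small helper that assembles the task() call. The helper below is a hypothetical convenience for illustration, not part of any Informix client library; it only builds the SQL text:

```python
def build_task_sql(operation, *args):
    """Assemble an EXECUTE FUNCTION task(...) statement.
    For table operations, the database and owner names are optional;
    omit them to use the current database and owner."""
    quoted = ", ".join('"%s"' % a for a in (operation,) + args)
    return "EXECUTE FUNCTION task(%s);" % quoted

# Table estimate with explicit database and owner:
print(build_task_sql("table estimate_compression",
                     "cash_transaction", "store123", "wong"))
# Fragment estimate with a space-separated partnum list:
print(build_task_sql("fragment estimate_compression", "14680071 14680072"))
```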
Estimating compression ratios
This operation displays the estimated compression ratio that can be achieved, the current compression ratio, an estimate of the percentage gain or loss, the partition number of each fragment, and the full name of the table, including the database, owner, and table names. The current ratio is 0 percent if the table is not compressed.
Output from compression estimates for tables and fragments looks the same, except that the output for a table always shows all fragments in the table, while the output for a fragment shows information only for the specified fragments. Listing 72 shows the command for estimating the compression ratio of a table.
Listing 72. Estimating a table
EXECUTE FUNCTION task("table estimate_compression", "table_name", "database_name", "owner_name");
Listing 73 shows the command for estimating the compression ratio of a fragment.
Listing 73. Estimating a fragment
EXECUTE FUNCTION task("fragment estimate_compression", "partnum_list");
For example, Listing 74 shows a command that tells IBM Informix to estimate the benefit of compressing a table named cash_transaction in a store123 database in which wong is the owner.
Listing 74. Estimating compression for a table
EXECUTE FUNCTION task ("table estimate_compression", "cash_transaction", "store123", "wong");
In the example in Table 4, the results in the first row are for a fragment that has already been compressed. The second fragment is not compressed. If you recompress the first fragment, a 0.4 percent increase in saved space can occur. If you compress the second fragment, a 75.7 percent increase can occur.
Table 4. Estimating compression ratios
| Estimated compression | Current compression | Change | Part number | Table |
Creating a compression dictionary
You can create a compression dictionary based on existing rows for IBM Informix to use when compressing data in tables or table fragments. After you create the dictionary, IBM Informix will use the dictionary to compress newly inserted or updated rows.
If a compression dictionary does not exist, you can also create one when you run the compress command. The only difference between the two commands is that the compress command also compresses existing data in the table or fragment.
If you want to create a compression dictionary for a fragment, the fragment must contain at least 2,000 rows. If you want to create a compression dictionary for a table, each fragment of the table must contain at least 2,000 rows.
Listing 75 shows the compression dictionary API command for a table.
Listing 75. Compression dictionary API commands for a table
EXECUTE FUNCTION task("table create_dictionary", "table_name","database_name", "owner_name");
Listing 76 shows the compression dictionary API command for a fragment.
Listing 76. Compression dictionary API commands for a fragment
EXECUTE FUNCTION task("fragment create_dictionary", "partnum_list");
Listing 77 shows an example command that creates a compression dictionary for a table.
Listing 77. Creating a compression dictionary
EXECUTE FUNCTION task("table create_dictionary","classical","music","Joe");
To compress data in existing table or fragment rows after you create the compression dictionary, you must run a compress command.
You can delete a compression dictionary only after you uncompress the table or fragment.
Compressing tables and table fragments
The compress operation creates a compression dictionary if one does not exist, and it compresses rows without moving them. However, the space allocated to the table remains the same after the rows are compressed.
Listing 78 shows the statement for a compression for a table.
Listing 78. Compressing a table
EXECUTE FUNCTION task("table compress", "table_name", "database_name", "owner_name");
Listing 79 shows an example.
Listing 79. Example of compressing a table
EXECUTE FUNCTION task("table compress","opera","music","bob");
Listing 80 shows the statement for a compression for a fragment.
Listing 80. Compressing a fragment
EXECUTE FUNCTION task("fragment compress", "partnum_list");
Listing 81 shows an example.
Listing 81. Example of compressing a fragment
EXECUTE FUNCTION admin("fragment compress","14680071");
If you interrupt a compress operation and reissue a compress command, IBM Informix continues compressing, using the existing compression dictionary.
If you change the fragmentation strategy for a table after you perform a compression operation, you must recompress.
Consolidating free space in tables
If you want to consolidate free space or return free space to the dbspace without compressing or recompressing, you can run a command that tells the server to repack, shrink, or repack and shrink. You can also combine these operations with compression by expanding the arguments in any of the following combinations:
- Compress repack
- Compress repack shrink
- Compress shrink
You can consolidate (repack) free space in tables and fragments when you compress them, or separately without compressing. This operation moves free space to the end of the table, where a shrink operation can return it to the dbspace for other tables to use.
You can perform a repack operation online or offline using the repack or repack_offline argument. The repack_offline operation is the same as the repack operation, except that IBM Informix performs the operation while holding an exclusive lock on the table or fragment. This operation prevents all other access to data until the operation is completed.
If light appends occur in a table or fragment while a repack operation is occurring, the repack operation will not complete the consolidation of space at the end of a table or fragment. The repack operation does not complete because the new extents are added in the location where the repack operation has already occurred, so space cannot be returned to the dbspace.
To complete the repack process, you must run a second repack operation after the light append activity has completed. This second repack operation builds on the work of the first repack operation.
Dropping or disabling indexes before you complete a repack_offline operation can decrease the amount of time that it takes the server to complete the operation. Afterward, you can re-create or re-enable the indexes, preferably taking advantage of PDQ. Dropping or disabling the indexes and then creating or enabling them again can be faster than completing a repack_offline operation.
Listing 82 shows the statement for consolidating free space online or offline for a table.
Listing 82. Consolidating free space for a table
EXECUTE FUNCTION task("table repack", "table_name", "database_name", "owner_name");
EXECUTE FUNCTION task("table repack_offline", "table_name", "database_name", "owner_name");
Listing 83 shows the statement for consolidating free space online or offline for a fragment.
Listing 83. Consolidating free space for a fragment
EXECUTE FUNCTION task("fragment repack", "partnum_list");
EXECUTE FUNCTION task("fragment repack_offline", "partnum_list");
Optionally expand the arguments to include compress and shrink in any of the following combinations:
- Compress repack
- Compress repack shrink
- Repack shrink
Listing 84 shows some examples.
Listing 84. Examples of consolidating free space
EXECUTE FUNCTION task("table repack","opera","music","bob");
EXECUTE FUNCTION task("table repack_offline","folk","music","janna");
EXECUTE FUNCTION task("fragment repack shrink", "14680071");
You can cancel a command with a compress argument, for example, by typing CTRL+C in DB-Access. You can reissue commands with repack or repack_offline arguments after a prior interrupted command. Compress and repack operations are logged, but they run in small portions.
Returning free space to the dbspace
You can return free space to the dbspace (shrink the space) when you compress, repack, or compress and repack tables or fragments. Or you can return free space separately without compressing or repacking. Returning free space reduces the total size of the fragment or table.
You can safely shrink the entire table without compromising the allocation strategy of the table. For example, if you have a fragmented table with one fragment for each day of the week and many fragments pre-allocated for future use, you can shrink the table without compromising this allocation strategy. If the table is empty, IBM Informix shrinks the table to the initial extent size specified when the table was created.
When you initiate a shrink operation, IBM Informix shortens extents as follows:
- It shortens all extents except the first extent to as small a size as possible.
- If the table is entirely in the first extent (for example, because the table is empty), IBM Informix does not shrink the first extent to a size smaller than the extent size that was specified in the CREATE TABLE statement.
You can use the MODIFY EXTENT SIZE clause of the ALTER TABLE statement to reduce the current extent size. After you do this, you can rerun the shrink operation to shrink the first extent to the new extent size.
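The first-extent rule can be modeled as a floor on the shrink target. A minimal sketch, under the assumption that sizes are expressed in pages:

```python
def shrink_first_extent(current_size, pages_in_use, create_extent_size):
    """Model of the shrink rule: the first extent is never shrunk below
    the extent size declared at CREATE TABLE time (or as later lowered
    by ALTER TABLE ... MODIFY EXTENT SIZE), nor below the pages in use."""
    return min(current_size, max(pages_in_use, create_extent_size))

# First extent grown to 1000 pages, 8 in use, created with EXTENT SIZE 256:
print(shrink_first_extent(1000, 8, 256))    # 256
# After ALTER TABLE ... MODIFY EXTENT SIZE 16, rerunning shrink goes lower:
print(shrink_first_extent(256, 8, 16))      # 16
```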
Listing 85 shows the statements for returning free space to the dbspace for a table.
Listing 85. Returning free space to the dbspace for a table
EXECUTE FUNCTION admin("table shrink", "table_name","database_name", "owner_name");
Listing 86 shows the statements for returning free space to the dbspace for a fragment.
Listing 86. Returning free space to the dbspace for a fragment
EXECUTE FUNCTION task("fragment shrink", "partnum_list");
You can optionally expand the arguments to include compress and repack in any of the following combinations:
- Compress repack shrink
- Compress shrink
- Repack shrink
Listing 87 shows some examples.
Listing 87. Examples of returning free space to the dbspace
EXECUTE FUNCTION task("table shrink","opera","music","bob");
EXECUTE FUNCTION task("fragment repack shrink", "14680071");
Uncompressing tables and table fragments
You can uncompress previously compressed tables and fragments. Uncompressing a table or fragment deactivates compression for new insert and update operations, uncompresses all compressed rows, deactivates the compression dictionary, and allocates new pages for rows that no longer fit on their original pages.
You can uncompress online or offline, using the uncompress or uncompress_offline argument. An uncompress_offline operation is the same as the uncompress operation, except that this operation is performed while holding an exclusive lock on the fragment, preventing all other access to the fragment data until the operation is completed.
Dropping or disabling indexes before you complete an uncompress_offline operation can decrease the amount of time that it takes the server to complete the operation. Afterward, you can re-create or re-enable the indexes, preferably taking advantage of PDQ. Dropping or disabling the indexes and then creating or enabling them again can be faster than completing a repack_offline or uncompress_offline operation.
The table or fragment must already be compressed before you can uncompress it.
Listing 88 shows the statement for uncompressing online or offline for a table.
Listing 88. Uncompressing online or offline for a table
EXECUTE FUNCTION task("table uncompress", "table_name","database_name","owner_name");
EXECUTE FUNCTION admin("table uncompress_offline","table_name","database_name", "owner_name");
Listing 89 shows the statement for uncompressing online or offline for a fragment.
Listing 89. Uncompressing online or offline for a fragment
EXECUTE FUNCTION task("fragment uncompress", "partnum_list");
EXECUTE FUNCTION task("fragment uncompress_offline", "partnum_list");
Listing 90 shows some examples.
Listing 90. Examples of uncompressing
EXECUTE FUNCTION task("table uncompress","rock","music","mario");
EXECUTE FUNCTION task("fragment uncompress_offline","14680071");
If a table is uncompressed, IBM Informix marks the dictionary for that table as inactive. IBM Informix does not delete dictionaries, because enterprise replication functions use the dictionaries for older logs. You can delete the dictionaries that you no longer require.
You can cancel a command with an uncompress argument, for example, by typing CTRL+C in DB-Access.
You can reissue commands with uncompress and uncompress_offline arguments after a prior interrupted command. Compress, repack, and uncompress operations are logged, but they run in small portions.
Deleting compression dictionaries
You can delete an inactive compression dictionary for a specific table or fragment, you can delete all inactive compression dictionaries, or you can delete all inactive compression dictionaries up to a specified date. You must uncompress tables and fragments, which makes the dictionaries inactive, before you delete any compression dictionaries that were created for the tables and fragments.
Do not remove compression dictionaries that enterprise replication requires.
Prerequisites to deleting compression dictionaries
- Uncompress or drop the table or fragment before deleting the associated dictionary. You can delete only a compression dictionary that is no longer used by a compressed table or fragment.
- Be sure that enterprise replication functions are not using the compression dictionary for older logs.
- Archive any dbspace that contains a table or fragment with a compression dictionary, even if you uncompressed data in the table or fragment and the dictionary is no longer active.
APIs to delete compression dictionaries
Table 5 offers the API commands to delete compression dictionaries.
Table 5. API commands to delete compression dictionaries
| Admin() or task() function argument | Function |
| table purge_dictionary or fragment purge_dictionary | Deletes a specific, inactive dictionary |
| compression purge_dictionary_all | Deletes all inactive dictionaries |
| compression purge_dictionary | Deletes all inactive dictionaries that were created on or before a specific date. You can use any date in a format that can be converted to a DATE data type, based on your locale and environment (for example, "03/08/09" as in Listing 91). |
Listing 91 shows an example.
Listing 91. Example of purging compression dictionary
EXECUTE FUNCTION task("compression purge_dictionary","03/08/09");
Moving compressed data
You can use the high-performance loader (HPL) or any of the other IBM Informix utilities, except the onunload and onload utilities, to move compressed data from one database to another. You must uncompress data in compressed tables and fragments before using the onunload and onload utilities.
The dbexport utility automatically uncompresses compressed data during export. Therefore, if your database contains tables or fragments with compressed data, you must recompress after you use the dbimport utility to import the data.
Monitoring compression information
You can use IBM Informix utilities, a sysmaster database table, and a sysmaster view to display compression statistics, information about compression dictionaries, and the compression dictionary. Table 6 shows utilities, table, and views that offer compression information.
Table 6. Utilities, sysmaster table, and views that show compression information
| Utility, table, or view | Description |
| oncheck -pT | Displays the number of any compressed rows in a table or table fragment and the percentage of table or table-fragment rows that are compressed. If table or fragment rows are not compressed, the Compressed Data Summary section is not included in the output. |
| onlog | Uses the compression dictionary to expand compressed data and to display the uncompressed contents of compressed log records. |
| onstat -g dsk | Displays information that shows the progress of currently running compression operations. |
| onstat -g ppd | Displays information about the active compression dictionaries that exist for currently open compressed fragments (also called partitions). This option shows the same information as the syscompdicts view in the sysmaster database. |
| syscompdicts_full table | Displays metadata about the compression dictionary and the compression dictionary binary object. Only user informix can query this table. |
| syscompdicts view | Displays the same information as the syscompdicts_full table, except that for security reasons, it excludes the dict_dictionary column, which contains the compression dictionary binary object. |
You can use an UNLOAD statement to unload the compression dictionary from the syscompdicts_full table to the compression dictionary file, as shown in Listing 92.
Listing 92. Unloading the compression dictionary
UNLOAD TO 'compression_dictionary_file' SELECT * FROM sysmaster:syscompdicts_full;
Defragmenting partitions to merge extents
You can improve performance by defragmenting partitions to merge non-contiguous extents.
A frequently updated table can become fragmented over time, which degrades performance every time the server accesses the table. Defragmenting a table brings data rows closer together and avoids partition header page overflow problems. Defragmenting an index brings the entries closer together, which improves the speed at which the table information is accessed.
The defragment operation reorganizes the table and index into fewer, larger contiguous extents. A common misconception is that the defragment operation increases log usage; in fact, it does not.
To determine how many extents a table, index, or partition has, you can run the oncheck -pt and oncheck -pT commands.
To defragment a table, index, or partition, run the SQL administration API task() or admin() function with the defragment argument or the defragment partnum argument and specify the table name, index, or partition number that you want to defragment.
Limitations and considerations
Before you defragment a partition, review these important considerations:
- You cannot stop a defragment request after the request has been submitted.
- You cannot defragment the following objects:
- Pseudo tables, such as virtual-table interface (VTI) tables
- Tables with virtual-index interface (VII) indexes
- Tables with b-tree functional indexes
- Temporary tables
- External tables
- Sort files
- Optical BLOB files
- System catalog tables
- Do not issue conflicting operations on a table or partition that you want to defragment. The first operation must complete before the second operation starts; if the first operation is still running, the request for the second operation returns an error. The following list contains examples of conflicting operations:
- Only one defragment request can operate on a partition at a time.
- Only one defragment request can operate on a dbspace at a time.
- A table cannot be defragmented when DDL statements, such as DROP TABLE or ALTER FRAGMENT, are being run on the table or partition.
- A table cannot be defragmented when the table is being truncated.
- A table cannot be defragmented when the table is being compressed or uncompressed.
- A table cannot be defragmented when an online index build is being run.
- A table that has exclusive access set on it cannot be defragmented.
If there are problems completing a defragment request, error messages are sent to the online log file.
API interface to defragment a table
You can use the defragment argument with the admin() or task() function to defragment tables or indexes to merge non-contiguous extents. You can specify either the defragment argument or defragment partnum argument using the syntax in Listing 93.
Listing 93. Defragmenting a table
EXECUTE FUNCTION task("defragment", "database:owner.table");
EXECUTE FUNCTION task("defragment partnum", "partnum_list");
Table 7 shows the elements and considerations for defragmenting a table.
Table 7. Element information
| Element | Description | Key considerations |
| database | Name of the database that includes the table or index that you want to defragment | |
| owner | User ID of the owner of the table | |
| table | Name of the table to defragment | |
| partnum_list | One or more partition numbers to defragment | Use a comma-separated list of partition numbers to specify more than one partition. |
Use the defragment argument to defragment specific tables. Use the defragment partnum argument to defragment one or more specific disk partitions.
Information about defragmentation is stored in shared memory. Use the oncheck -pt and oncheck -pT commands to display the number of extents for a specific table or fragment. Use the onstat -g defragment command to display information about active defragment requests.
If the defragment request reduces the number of extents by at least 1 extent, the request returns 0 (success), even if there are many extents in the partition.
If a partition has a single extent, the defragment request returns 0 to indicate that the request was a success, even though no extents were merged.
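The return-code rule above can be summarized in a tiny model. The nonzero failure value here is a placeholder assumption for illustration, not an actual Informix error code:

```python
def defragment_status(extents_before, extents_after):
    """Model of the success rule: 0 (success) if at least one extent
    was merged away, or if the partition already had a single extent."""
    if extents_before == 1 or extents_after < extents_before:
        return 0
    return -1  # placeholder failure code (assumption)

print(defragment_status(12, 3))   # 0: extents were merged
print(defragment_status(1, 1))    # 0: nothing to merge, still success
```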
Listing 94 shows an example of how to defragment the customer table in the stores_demo database.
Listing 94. Defragment the customer table in the stores_demo database
EXECUTE FUNCTION task("defragment","stores_demo:informix.customer");
Listing 95 shows an example of how to specify the partition number to defragment an index.
Listing 95. Specify the partition number to defragment an index
EXECUTE FUNCTION task("defragment partnum","2097154");
Listing 96 shows an example of how to defragment a list of partitions.
Listing 96. Defragment a list of partitions
EXECUTE FUNCTION task("defragment partnum", "16777217,28477346");
Resources
Learn
- See the other tutorials in the series to continue your education.
- Use an RSS feed to request notification for the upcoming tutorials in this series. (Find out more about RSS feeds of developerWorks content.)
- Go to the developerWorks Informix page to read articles and tutorials and to get the resources you need to expand your IBM Informix skills.
- Refer to the IBM Informix v11.70 Information Center to learn more about IBM Informix. Find the information that you need to use IBM Informix products and features.
- Look in the IBM Informix v11.70 Administrator's Reference to learn more about the onmode utility.
- Learn more about resources for IBM Informix certification at the IBM Information Management certification webpage.
- Learn more about Information Management at the developerWorks Information Management zone. Find technical documentation, how-to articles, education, downloads, product information, and more.
- Stay current with developerWorks technical events and webcasts.
- Follow developerWorks on Twitter.
Get products and technologies
- Download Informix Innovator-C Edition for free for small workload productions.
- Download Informix Ultimate Edition for a trial version.
- Build your next development project with IBM trial software, available for download directly from developerWorks.
- Participate in the discussion forum.
- Check out the developerWorks blogs and get involved in the developerWorks community.