System Administration Certification exam 919 for Informix 11.70 prep, Part 5

Informix backup and restore

Archiving Informix databases



Before you start

Learn what to expect from this tutorial, and how to get the most out of it.

About this series

Thinking about seeking certification on System Administration for Informix version 11.70 (Exam 919)? If so, you've landed in the right spot to get started. This series of Informix certification preparation tutorials covers all the topics you'll need to understand before you read that first exam question. Even if you're not seeking certification right away, this set of tutorials is a great place to start learning what's new in Informix 11.70.

About this tutorial

In this tutorial, you'll learn about Informix backup and restore concepts, strategies, utilities, and commands for managing your database backup and restore processes. The material provided here primarily covers the objectives in Section 5 of the exam, entitled Backup and Restore.

Topics covered in this tutorial include the following:

  • Backup and restore utilities
  • Backup and restore strategies, types, and options
  • Configuration and commands needed to perform the backup and restore processes
  • Monitoring and debugging backup and restores
  • Database recovery operations


After completing this tutorial, you should be able to:

  • Develop an archiving strategy and schedule for database server backups
  • Know which type of backup to perform and when
  • Configure ON-Bar and ontape to perform backups and restores
  • Know how to monitor and verify backups
  • Understand recovery
  • Restore the database server from an archive backup
  • Optimize the fast recovery process


To understand the material presented in this tutorial you must be familiar with the following:

  • The Informix environment (configuration file and parameters, installation and administration)
  • Database server commands (onstat, onmode, oncheck, dbschema)
  • Informix concepts and terminology (dbspaces, chunks, physical log, logical logs, checkpoint, and so on)

System requirements

You do not need a copy of Informix to complete this tutorial. However, you will get more out of the tutorial if you download the free trial version of Informix Innovator-C Edition (see Related topics) to work along with this tutorial.

Informix backup and restore overview

As data becomes one of a business's most valuable assets, businesses cannot afford to lose it. There are many different ways to ensure data availability, from redundant systems using SDS, HDR, or RSS, to backup and recovery strategies that might involve duplicate hardware and system recovery. This tutorial explains the different ways that Informix can help you plan for data recovery when a disaster strikes. There are many different approaches, but all of them require copies of the data to be available in case of failure.

Understanding backup, restore, and recovery tools

The Informix server offers multiple ways to save data. Data can be unloaded logically using tools such as dbexport, HPL, or external tables, or it can be saved using the ON-Bar and ontape backup and restore tools. This tutorial mainly covers the Informix backup and restore tools. Backup and restore tools work only on Informix files (storage spaces and chunks); the tools will not access data stored in other types of files.


An Informix backup is a copy of one or more storage spaces and logical logs that the database server maintains. Data is saved in Informix page format. You might need more than one set of backup data to bring the database server back online.


An Informix restore re-creates database server data from backed-up storage spaces and logical-log files. A restore copies data from the backup media back to disk and brings the storage spaces to a consistent state by applying the transactions in the logical log backups.

Recovery tools and utilities available with Informix

Informix has two main backup and restore tools: ON-Bar and ontape. Additional Informix tools for backup and recovery include archecker, which can be used either to check an archive or to restore tables from an archive, and Informix Storage Manager (ISM), which is a simple storage manager. Because ISM is shipped with the product, customers can use ON-Bar out of the box without the need for a separate storage manager.

Using ON-Bar and ontape tools

Both ON-Bar and ontape perform backup and restore of storage spaces and logical logs. At the Informix-server level, both tools work the same way: the internal logic that finds the data to be saved or restored is the same, and both tools start the same archive backup threads within the instance. The major differences between the two tools are the way the data is saved and how the tools are monitored during backup and restore. Both ON-Bar and ontape support a feature called external backup, which enables the use of operating-system methods such as mirroring to get a faster backup or restore of the data.


ON-Bar does not send data directly to a storage medium. Instead it sends data to a storage manager that handles the data. The storage manager is responsible for placing data in a location and finding it again or retiring the data after a period of time.

ON-Bar can work with storage managers that support the X/Open Backup Services Application (XBSA) programmer's interface standard. The Informix Storage Manager (ISM) is shipped with the product to enable ON-Bar backups without the need to purchase a third-party storage manager (for example, Tivoli Storage Manager (TSM), Omniback, Legato Networker, or Veritas NetBackup). ON-Bar and the XBSA shared library must be compiled on the same platform (32-bit or 64-bit). ON-Bar uses the sysutils database to keep track of backup data sent to the storage manager.

ON-Bar features include the following:

  • Parallel backup and restore
  • Point-in-time restores
  • Integrated backup verification command
  • Easily integrated into an existing storage-manager solution


The ontape tool is a simple backup and restore utility. It saves the data from the database server to a tape or disk. Because there is no storage manager involved, it is easy to implement additional backup methods such as backup to STDIO (standard output) or backup to the cloud. It performs backups and restores serially.

Ontape features include the following:

  • Simplicity and ease of use
  • Backup capability to tape, file, or directory
  • Backup capability to STDIO
  • Backup to the cloud

Ontape and ON-Bar backups are not interchangeable. For example, you cannot create a backup with ontape and restore it with ON-Bar.
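To make the two tools concrete, the following sketch shows the most common command forms for a full backup and a complete restore with each tool. These are illustrative invocations; the exact options you need depend on your configuration:

```
# ontape: serial level-0 backup to the device named by TAPEDEV,
# followed by a full restore
ontape -s -L 0
ontape -r

# ON-Bar: level-0 backup of all storage spaces through the storage
# manager, followed by a complete restore (physical plus logical)
onbar -b -L 0
onbar -r
```

Because the two formats are not interchangeable, each restore command only understands backups made by its own tool.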

Understanding the archecker utility

The archecker utility performs the following:

  • Verifies that all pages required to restore a backup exist on the media in the correct format
  • Restores a single table from a backup

For backup verification, two modes are available: an integrated mode within ON-Bar (onbar -v) or a stand-alone mode, where archecker is called directly. The archecker utility verifies ON-Bar standard and whole-system backups. The archecker utility cannot verify logical-log backups.

For a table-level restore with archecker, you need a schema file for the table. Archecker goes through the storage space and logical log backup files and finds the rows that belong to the table. It re-creates those rows and inserts them into the table defined in the schema file.

Understanding ISM

ISM is a simple storage manager that can be used with ON-Bar to back up an Informix instance. You need to configure ISM before it can be used for backup. The BAR_BSALIB_PATH onconfig parameter must be set to the XBSA library that ships with ISM.

Employing recovery strategies

Recovery strategies require planning based on an understanding of the business-critical data, the backup tools available, and the tools' capabilities. The first step in planning for recovery is to outline recovery goals based on your understanding of the business data. Define the recovery goal for your business in terms of tolerance for data loss and acceptable downtime. Following are some key questions that can help you plan the recovery process:

  • How much data loss, if any, is acceptable?
  • How long can your business function without the data?
  • How long can your production system be down during a restore?
  • How much transaction time can be lost?
  • How much budget is available for a recovery plan?

There will most likely be a trade-off between cost and speed of recovery. The faster you want to recover an instance, the more complex the recovery plan needs to be, and the more hardware resources must be available.

Once you have set the recovery goals, identify the appropriate IBM Informix backup and recovery tools. Recovery planning options should include solutions such as the following:

  • IBM Informix Backup utilities (ontape, ON-Bar)
  • Load/unload utilities
  • High performance loader
  • dbexport/dbimport
  • External tables
  • SDS, HDR, or RSS
  • External backup and restore using disk mirroring
  • Backup to the cloud

The challenge is to weigh the advantages and disadvantages of speed and cost. Following is an effective approach:

  1. Outline the many possible situations that will require recovery.
  2. Categorize the failure based on severity.
  3. Develop a recovery plan for multiple levels of failure.

Here are some examples of multiple-level failures that can help you determine the severity of the failure:

  • Unintended deletion of a database object (rows, columns, tables)
  • Unintended deletion of a server object (databases, chunks, storage space)
  • Data corruption or incorrect data created
  • Hardware failure (such as failure of a disk that contains chunk files)
  • Database server failure
  • Natural disaster

In some of those scenarios, you need backup methods and tools in addition to Informix backups to get your data back.

Backing up data with Informix

This section describes various types of backups that might be performed using the tools provided with Informix.

Performing a physical backup

The physical backup process backs up all or selected storage spaces. You can perform this backup while the database server is in online, quiescent, or single-user mode. A physical backup (or standard backup) archives only the used pages in each storage space instead of all the allocated pages. Both the ontape and ON-Bar utilities can perform physical backups.

A physical backup does not archive temporary storage spaces, nor does it archive mirror chunks while the primary chunks are available. A physical backup also does not archive storage space pages that are allocated to the database server but not yet allocated to a tablespace extent.

The ontape physical backup does not include any database configuration or sqlhosts files. You need to use operating system commands to back up those files separately. Starting with Informix 11.70, the ON-Bar utility automatically backs up onconfig, ixbar, and oncfg files and keeps them inside the ON-Bar archive itself. However, to take advantage of this new ON-Bar feature, your storage manager must be compatible with Informix 11.70.

The following sections describe physical backups.

Full, level-0 backup

A level-0 archive contains a copy of all data in the storage space of the Informix instance at the time the archive started. Level-0 backups can be time-consuming, because all the used disk pages need to be written to backup media. A level-0 backup is needed for any complete restore, but you can do incremental backups (level-1 and level-2) that refer to a level-0 backup. You can consider an external backup to decrease the time needed to do a level-0 backup.

Level-1 backup

A level-1 backup copies only the data that has changed since the last level-0 backup. It will likely take less space and less time than a level-0 backup, although this depends on how much data has changed in the meantime. A level-1 backup needs the level-0 backup it refers to for a restore.

Level-2 backup

A level-2 backup refers to a level-1 backup, and it saves only the data that has changed since the last level-1 backup. It will likely take less space and time than a level-1 backup, but this also depends on the amount of data change. A level-2 backup needs both the level-1 and the level-0 backups it refers to in order to perform a complete restore.
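A typical schedule combines the three levels to bound both backup time and restore time. As a hypothetical weekly cycle using ontape (the cadence is an example, not a recommendation from the exam objectives):

```
ontape -s -L 0   # Sunday: full level-0 backup of all storage spaces
ontape -s -L 1   # Monday through Saturday night: level-1, changes since level-0
ontape -s -L 2   # optionally at midday: level-2, changes since the last level-1
```

A restore then applies the most recent level-0, then the most recent level-1, then the most recent level-2, followed by the logical logs.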

Serial and parallel backups

As the name implies, a serial backup writes the storage spaces serially to disk, so each storage space is written one at a time. This is the only mode ontape can do. ON-Bar also backs up storage spaces serially when the BAR_MAX_BACKUP configuration parameter is set to 1.

During a parallel backup, more than one storage space is backed up at the same time. Since Informix 11.10, all ON-Bar backups are done in parallel unless the BAR_MAX_BACKUP configuration parameter is set to 1. ON-Bar automatically sorts the storage spaces to perform a backup in the most efficient way.

A standard ON-Bar backup is a parallel backup of selected or all storage spaces, performed with onbar -b options without -w. In a standard ON-Bar backup, the database server performs a checkpoint for each storage space as it is backed up. A restore from a standard ON-Bar backup also requires restoring the logical log backups.

A whole-system backup, specified with the -w option, automatically includes the logical log records of the transactions that were open at the time of the archive checkpoint. A whole-system restore can therefore restore to a consistent point without any explicit logical log backup and restore.

A whole system backup can be done in parallel. The rootdbs is still backed up first and by itself. Then, the rest of the storage spaces are backed up in parallel, based on the BAR_MAX_BACKUP configuration setting. In a whole system ON-Bar backup, the database server performs a single checkpoint for all of the storage spaces being backed up.
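As a sketch, a whole-system backup and the corresponding restore use the -w option described above (levels and other options vary by environment):

```
onbar -b -w -L 0   # whole-system level-0 backup (single checkpoint)
onbar -r -w        # whole-system restore; no separate logical log restore needed
```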

Backup filters

Ontape, ON-Bar, and archecker enable using a filter plug-in with the backup or restore operation. A filter plug-in can be an operating system command or an application module to transform data during backup or restore. For example, a backup filter can encrypt data before it leaves the database server for the storage medium and decrypt data during restore. A filter can also be used to perform other operations, such as compression.

The BACKUP_FILTER configuration parameter enables you to specify a filter program that operates on the backup data. The specified filter is automatically applied to the backup data the database server delivers, before the data is processed by the ontape or ON-Bar backup utilities. Similarly, the RESTORE_FILTER configuration parameter enables the data to be transformed back appropriately while restoring data. Listing 1 shows an example of BACKUP_FILTER and RESTORE_FILTER settings.

Listing 1. Configuration parameters set to a filter for backup and restore
 On UNIX and Linux platform:
 BACKUP_FILTER      /usr/bin/gzip 
 RESTORE_FILTER     /usr/bin/gzip -d -c

 On Windows platform:
 BACKUP_FILTER      c:\cygwin\bin\gzip.exe
 RESTORE_FILTER     c:\cygwin\bin\gunzip.exe

You can use filters with archecker while verifying backup or extracting data. The AC_RESTORE_FILTER archecker configuration parameter must be set appropriately before accessing backup data that has been processed by a backup filter.

The RESTORE_FILTER configuration parameter and the AC_RESTORE_FILTER archecker configuration parameter must be set to the same value. External backup and restore operations do not support backup and restore filters.
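The essential property of a BACKUP_FILTER/RESTORE_FILTER pair is that the restore filter inverts the backup filter on the data stream. Using the gzip pair from Listing 1, you can check this round trip outside the server:

```shell
# Filters read the data stream on stdin and write it to stdout, so the
# gzip pair from Listing 1 must satisfy: restore(backup(data)) == data.
printf 'informix page images' | gzip | gzip -d -c
# prints: informix page images
```

Any program you substitute (an encryption utility, for example) must preserve this property, or restores from filtered backups will fail.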

Fake backup

The fake backup is not a true backup. During a fake backup, the database server does not write any data to the storage device. Because no backup actually occurs during a fake backup, the operation is very fast. A fake backup is useful when you need to change logging modes on a database, to activate newly created blobspaces, or to make tables available for access after the high performance loader (HPL) has loaded data in express mode. No restore is possible from a fake backup.

Take extra precautions when you perform fake backups in a production environment. You cannot restore data from a fake backup, and you might not be able to restore to the current point in time afterward.

Performing a logical log backup

A logical log backup is the process of copying the contents of a logical log file to a storage media. The logical logs store records of checkpoints, administrative activities such as Data Definition Language (DDL) statements, and transaction activity for the databases in the Informix instance. Each Informix instance has a finite number of logical logs.

An Informix instance uses the logical logs in a circular manner. Records are written to the logical log files serially. When the first log fills up, Informix begins writing to the second log, and so on. When all the logs have been used, Informix begins writing to the first log again. Before Informix can reuse a log, all its data must be backed up, and there must be a checkpoint in a newer log.

For databases with buffered, unbuffered, or ANSI-mode logging, all inserts, updates, and deletes to tables are recorded in the logical log. These records are used either for fast recovery after an abnormal shutdown or during the logical-recovery phase of a restore. Therefore, transaction activity must be recorded continually, and it must be retained until the next storage space or whole-system backup is performed. All parallel backups except whole-system backups require logical log backups for a successful restore.

Manual, continuous, and automatic logical log backups

Because logical log files are needed to ensure the consistency of the instance in case of a failure or a restore, the database server hangs if all logical logs are full and there is no backup. A logical log file is only overwritten once it is marked with the B flag, which indicates a backed-up logical log.

There are several different ways to back up the logical logs. You can run logical log backups manually (on demand) as an administrator or operator. The backups can also run continually using continuous log backup, or they can be triggered automatically using the ALARMPROGRAM configuration parameter.

On-demand manual logical log backups are performed when an administrator or operator executes a log backup request using either ON-Bar or ontape. A manual logical-log backup backs up all the full logical log files, and it stops at the current logical log file.

Continuous logical log backups are typically used if you use ontape as your backup and restore utility. The ontape -c option continuously backs up each logical log as it fills or as the server switches to the next log. Continuous logical log backups require a dedicated terminal and backup device.

With automatic logical log backup, each time a logical log file becomes full, the database server automatically calls the specified program configured using the ALARMPROGRAM configuration parameter. By default, the ALARMPROGRAM calls the $INFORMIXDIR/etc/ script (%INFORMIXDIR%\etc\alarmprogram.bat on Windows platform). The default scripts use the onbar -b -l command to perform logical log backups if the BACKUPLOGS variable is set to Y in the scripts. Set BACKUPLOGS=Y in the default alarmprogram script to trigger automatic logical log backup.

Ontape can be set up to back up to a directory. You can also easily configure ontape through the ALARMPROGRAM to back up the logical logs. In this case, the script should use ontape -a -y. Remember to set LTAPEDEV to a directory; if LTAPEDEV is set to a file, subsequent logical log backups will fail with a message that the file already exists.
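As a hedged sketch (the event-class number and script structure here are assumptions, not copied from the shipped default script), an ALARMPROGRAM wrapper that uses ontape could look like this:

```
#!/bin/sh
# Hypothetical ALARMPROGRAM fragment: when the server raises the
# "logical log complete" event, back up all full logs with ontape.
# Assumes LTAPEDEV is set to a directory, as noted above.
EVENT_CLASS=$1
if [ "$EVENT_CLASS" -eq 23 ]; then   # 23: logical log file complete (assumed)
    ontape -a -y                     # back up full logs, answer prompts with yes
fi
```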

Logical log salvage

Before a restore, if logical logs that have not yet been backed up are found on a disk, they can be salvaged. This will be explained in Completing restores with Informix.

No logical log backup

You can set the onconfig parameter LTAPEDEV to /dev/null (NUL on Windows) if you are not interested in backing up your logical logs. The logical logs are then treated as free as soon as they fill. Discarding your logs makes it impossible to perform a logical restore. In the case of a hardware or software failure, you would be able to restore the data only if you had used ontape or had done a whole-system backup with ON-Bar.

Completing restores with Informix

During an Informix restore, the data is copied back to the storage spaces from a backup. Following are some scenarios that might require a restore:

  • The entire server system is unavailable (you cannot bring the server to online mode)
  • A critical storage space, such as the root dbspace or the storage space that holds the logs, is unavailable (one or more of the chunks are marked as down)
  • A non-critical storage space is unavailable (one or more of the chunks and their associated mirror chunks are marked as down)

Understanding restore types

Physical and logical restores

A physical restore is the process of restoring the storage spaces that were backed up with level-0, level-1, and level-2 backups. The physical restore process is very efficient, because it simply copies page images from the backup media and places them on disk. A physical restore needs a level-0 backup; the level-1 and level-2 backups are optional and can be applied after the level-0 backup.

A logical restore is the process of further restoring the Informix instance using the transactions found in the logical log backups. The logical restore occurs after the physical restore. Because the logical restore does not copy pages but rather replays transaction records, it is slower and less efficient than the physical restore process.

Cold restore, warm restore, and mixed restore

A cold restore occurs when the root dbspace or a storage space that holds the physical or logical logs is inaccessible. The database server must be in offline mode while performing a cold restore. Whole system restores are always cold restores, because all the storage spaces are being restored, including the critical storage spaces.

A warm restore occurs when the root storage space and the storage spaces holding the logical and physical logs are not affected. The Informix instance must be in online, quiescent, or fast recovery mode while performing a warm restore. If you want to restore only non-critical storage space while the engine is online, you can do a warm restore. A warm restore requires the rolling forward of the logical logs (logical restore) up to the current point in time so that the storage spaces are in a consistent state with the rest of the storage spaces.

A mixed restore is a combination of a cold restore of only the critical storage spaces followed by a warm restore of the non-critical storage spaces. While a mixed restore makes the critical data available sooner, the complete restore takes longer because the logical logs are restored and replayed several times: once for the initial cold restore and once for each subsequent warm restore.

A mixed restore is desirable, for example, when a server contains a root storage space, a logical log storage space, 3 one-hundred-gigabyte storage spaces containing business critical financial data, and 50 one-hundred-gigabyte storage spaces containing history data. It might be beneficial to recover the critical storage spaces and the 3 storage spaces containing the business critical data quickly using a cold restore. Once these storage spaces are restored and the system is available to users, the less important history data can be restored using a warm restore. While the total restore time for the system will take longer because the logical logs have to be applied twice and the warm restore has to share hardware resources with active users, a mixed restore allows business to resume more quickly.

Parallel restore and serial restore

During a parallel restore, multiple processes are spawned to restore multiple storage spaces in parallel. Parallel restores are also referred to as storage space restores. Storage space restores can be used to restore a single storage space, multiple storage spaces, or an entire Informix instance. Parallel restore is only available with ON-Bar. With ON-Bar, storage space restores are performed in parallel (unless BAR_MAX_BACKUP is set to 1). If you perform a restore using the onbar -r command, ON-Bar restores the storage spaces in parallel and replays the logical logs once.

With ontape, all restores are serial restores, because they restore one storage space at a time.

Renaming chunks during restore

You can rename chunks by specifying new chunk paths and offsets during a cold restore. This option is useful if you need to restore storage spaces to a different disk than the one on which the backup was made. You can rename any type of chunk, including critical chunks and mirror chunks.

This type of restore performs the following validations to rename chunks:

  • It validates that the old chunk pathnames and offsets exist in the archive reserved pages.
  • It validates that the new chunk pathnames and offsets do not overlap each other or existing chunks.
  • If renaming the primary root or mirror root chunk, it updates the ONCONFIG file parameters ROOTPATH and ROOTOFFSET, or MIRRORPATH and MIRROROFFSET. The old version of the ONCONFIG file is saved as ONCONFIG.localtime.
  • It restores the data from the old chunks to the new chunks (if the new chunks exist).
  • It writes the rename information for each chunk to the online log.

If any of the validation steps fail, the renaming process stops, and ON-Bar writes an error message to the ON-Bar activity log.

It is recommended that you perform a level-0 archive after you rename chunks; otherwise, you will have to restore the renamed chunk to its original pathname and then rename the chunk again.

If you add a chunk after performing a level-0 archive, that chunk cannot be renamed during a restore. Also, you cannot safely specify that chunk as a new path in the mapping list.
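As an illustration, renaming a chunk during a cold restore with ON-Bar uses paired old and new path/offset values. The device paths below are placeholders:

```
# Cold restore that maps the old chunk /dev/chunk1 (offset 0)
# to the new location /dev/newchunk1 (offset 0)
onbar -r -rename -p /dev/chunk1 -o 0 -n /dev/newchunk1 -o 0
```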

Salvaging logs during a restore

The process of backing up the logical log files off the disk to storage media before doing a restore is called logical log salvage. A restore might be needed after a system failure; however some logical log data might not have been backed up to backup media yet. This data must be salvaged, because it will be required to recover the system to the point in time of the failure. A cold restore after a system failure automatically attempts to salvage any logs, but you also have the option of manually salvaging the logs before the cold restore. The ON-Bar salvage log command is onbar -l -s. The ontape salvage log command is ontape -S.

The onbar -r command automatically salvages the logical logs. Use a combination of the onbar -r -p and onbar -r -l commands if you want to skip log salvage. With ontape, you can skip log salvage during a restore by answering No to the question Do you want to backup the logs?.

Using point-in-time or point-in-log restores

A point-in-time restore recovers the data from level-0, level-1, and level-2 backups and rolls forward the logical logs up to a specific point in time or up to a specific logical log. A point-in-time restore enables you to restore the database server to the state it was in at a particular moment. It is always a cold restore, and you can use it to undo mistakes that might otherwise not be fixable.

An example of such a mistake is accidentally dropping a table. A complete system restore will restore the table during the physical restore, but it will be dropped again during the logical restore. A point-in-time restore lets you restore the data to the moment just before the table was dropped. When you restore the database server to a specific time, any transactions that were uncommitted at the specified point in time are lost, as are all transactions that occurred after that point.
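For example, if a table was accidentally dropped at 11:36 on a given day, a point-in-time restore to just before the drop might be invoked as follows (the timestamp is illustrative):

```
# Cold, point-in-time restore of the whole instance to a moment
# just before the accidental DROP TABLE
onbar -r -t "2011-05-20 11:35:00"
```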

Using table-level restores

A table-level restore can be performed using the archecker utility. The archecker utility restores tables using a schema command file that you provide for the restore. The schema command file identifies the source table to be extracted, the destination table where the data is to be restored, and an INSERT statement that links the two tables. Table-level restores are described in more detail under Performing table-level restores with archecker.
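As a rough sketch of what such a schema command file contains (the table, column, and dbspace names here are hypothetical, and the exact statement set is defined in the archecker documentation):

```sql
database stores_demo;

-- source table as it existed in the backup
create table customer (id integer, name char(40)) in datadbs1;

-- destination table to receive the extracted rows
create table customer_restored (id integer, name char(40)) in datadbs2;

-- links the source to the destination
insert into customer_restored select * from customer;

restore to current;
```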

Using external backup and restore

An external backup or restore operation is performed external to the Informix server, without using ON-Bar or ontape during the backup or restore. The data is backed up or restored using a third-party tool instead. The database needs to be blocked for an external backup to ensure that all the data is saved for the same point in time. Often the data is mirrored on the disk, and during the block the mirror is split, so that the data can be saved from one of the copies while the other continues to be in use.

Following is a summary of steps required to perform an external backup:

  1. Block the database server using onmode -c block.
  2. Back up all the storage spaces and administrative files using a third-party tool or copy command.
  3. Unblock the database server using onmode -c unblock.
  4. Back up all the logical logs, including the current log, using ontape (ontape -a) or ON-Bar (onbar -b -l -c).
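The steps above can be sketched as a script. The copy command and paths are placeholders for whatever third-party tool and storage layout you actually use:

```
onmode -c block                          # 1. block the server at a checkpoint
cp /informix/chunks/* /backup/external/  # 2. external copy (placeholder command)
onmode -c unblock                        # 3. resume normal processing
ontape -a                                # 4. back up logical logs (or: onbar -b -l -c)
```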

Using ON-Bar configuration, commands, and syntax

This section describes the ON-Bar setup and syntax.

Understanding ON-Bar setup steps

Before using ON-Bar, you need to do the following:

  1. Configure the storage manager.
  2. Set appropriate ONCONFIG configuration parameters.
  3. Determine the backup strategy for logical logs. If you will be backing up logs using ON-Bar, then LTAPEDEV must not be set to /dev/null (or NUL on Windows platform).

Remember that ON-Bar backups do not include:

  • Storage space pages that are allocated to extents but currently unused.
  • Pages from mirror chunks, if the corresponding primary chunks are available.
  • Large objects in blobspaces stored on optical platters.
  • Temporary storage spaces.

Starting with Informix 11.70, ON-Bar automatically backs up the onconfig, ixbar and oncfg files and keeps them inside the ON-Bar archive itself. However, to take advantage of this new ON-Bar feature, your storage manager must be compatible with Informix 11.70. If your storage manager is not compatible, run the ON-Bar command with the -cf no option to ignore backing up the critical files. For example, you might use onbar -b -L 0 -cf no.

Configuring the storage manager

You must install and configure a storage manager in order to perform backups and restores using ON-Bar. The storage manager controls the storage devices and media used for backup and restore, while ON-Bar manages data movement and communication between the Informix server and the storage manager. Refer to your storage manager documentation for configuration information. Note that ON-Bar must use the version of the XBSA library that the third-party storage-manager manufacturer provides.

Remember to specify the absolute path name to set the path name of the XBSA library with the BAR_BSALIB_PATH configuration parameter in the onconfig file. You cannot use a relative path name with BAR_BSALIB_PATH.

Understanding the XBSA library

ON-Bar uses the X/Open Backup Services application programming interface (XBSA) to exchange information with a storage management system. There are two types of information that are exchanged: control data and backup data.

Control data is used to verify that ON-Bar and XBSA are compatible, to ensure that objects are restored to the proper Informix system in the proper order, and to track the history of backup objects. Backup data or restore data is the actual data from the storage spaces, blobspaces, or log files that are being backed up or restored. An XBSA transaction is used to transfer a database object (storage space or logical log) between the Informix server and the Storage Manager through ON-Bar. The transaction is a way to maintain backup and restore data consistency. It guarantees that all the data in a backup or restore object are transferred between the Informix server and the Storage Manager, or no data are transferred. Multiple XBSA transactions are allowed per session, and multiple concurrent sessions are allowed per server.

Learning about the ON-Bar components

This section describes the ON-Bar components.

Sysutils database

ON-Bar uses the sysutils database to manage backup and restore catalog information. The sysutils database is created automatically in the root storage space during database server initialization. ON-Bar updates the sysutils database each time you perform a backup, and it reads needed information during a warm restore. Table 1 shows some of the sysutils tables and the information stored in them.

Table 1. Contents of sysutils database
Table name: Description
bar_action: ON-Bar treats each storage space and logical log as a backup object. The bar_action table keeps historical information about backups and restores. It stores all backup and restore actions that are attempted against an object, except during a cold restore.
bar_instance: Each successful backup updates this table. ON-Bar uses this information during a restore.
bar_object: Keeps a list of all storage spaces and logical logs for each database server for which at least one backup attempt was made.
bar_server: Tracks the Informix database server associated with backup objects.

Emergency boot file

The emergency boot file contains all the information in the sysutils database for each backup object. ON-Bar uses this information during a cold restore, when the Informix server is offline and the sysutils database is therefore inaccessible. The emergency boot file is updated after each successful physical or logical log backup. The emergency boot file resides in the etc subdirectory under INFORMIXDIR. The file name is ixbar.servernum, where servernum is the value of the SERVERNUM configuration parameter. The file has its own format; altering it can cause ON-Bar to malfunction. Listing 2 shows an example of an ixbar file.

Listing 2. Example of an emergency boot file
ids11  rdbs R  0  1  0  766  0 2011-07-01 09:10:49  1  31 845343 1 0  - - 31 1449  4128  1
ids11  16   L  0  2  0  767  0 2011-07-01 09:10:51  1  0  0      2 0  - - 16 5711  0     0
ids11  17   L  0  3  0  768  0 2011-07-01 09:10:52  1  0  0      3 0  - - 17 4928  5711  0

Table 2 describes each column in the emergency boot file.

Table 2. Emergency boot file format
Server name: The Informix database server name
Storage space name or log unique ID: The name of the storage space or the logical log unique ID that was backed up
Object type: Type of the backup object. B=storage space containing BLOBs, CD=critical storage space, ND=non-critical storage space, R=root storage space, L=logical log
Whole archive: A Boolean value indicating whether this is a whole-system backup (a backup taken with the -w flag)
Action ID: A serial value that links the entries in the sysutils database together
Level: The physical backup level, such as 0, 1, or 2
Copy ID high: The high value of the storage manager copy ID
Copy ID low: The low value of the storage manager copy ID
Start date: Start date and time of the backup object
Bar version: The ON-Bar version (currently 1)
First log: The first log required to restore a storage space
Insert time: The archive checkpoint time
Required action ID: If this is a non-zero value, this action ID must be replayed before the current action ID (a prerequisite action ID).
Verified: A Boolean value indicating that the object has been verified
Verify date: The object verification date
Checkpoint log: The log that contains the archive checkpoint
Seal time: The time that the log was closed
Previous seal time: The time that the previous log was closed
Backup order: A Boolean value indicating the backup order of storage spaces based on size
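
The column layout above can be read with standard text tools. As a minimal sketch, the following pulls the first three whitespace-separated fields (server name, object, object type) from the first line of Listing 2; the line itself is copied verbatim from that listing:

```shell
# A sample ixbar entry from Listing 2; the first three whitespace-
# separated fields map to the first three columns of Table 2.
line='ids11  rdbs R  0  1  0  766  0 2011-07-01 09:10:49  1  31 845343 1 0  - - 31 1449  4128  1'
echo "$line" | awk '{print "server=" $1 " object=" $2 " type=" $3}'
# server=ids11 object=rdbs type=R
```

This is for inspection only; remember that editing the ixbar file itself can cause ON-Bar to malfunction.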

Informix provides the onsmsync utility to synchronize the contents of the sysutils database, the emergency boot file, and the storage manager catalogs. The onsmsync utility can also be used to purge backup information that is no longer needed.

Using the ON-Bar configuration parameters

Several configuration parameters are used with ON-Bar. Table 3 describes them.

Table 3. ON-Bar backup and restore configuration parameters
Parameter: Description
BAR_ACT_LOG: Specifies the full path name (in an existing directory) of the ON-Bar activity log file. The activity log is updated with messages and errors during ON-Bar backup and restore activities.
BAR_DEBUG: ON-Bar provides a mechanism to debug backup and restore failures. This parameter defines the debugging level, which determines how detailed the debugging information generated during an ON-Bar operation is; higher values generate more detail. The range of values is 0-9. The default value is 0, meaning that debugging is not enabled. Enabling BAR_DEBUG has a significant impact on ON-Bar performance. The debug level can be changed dynamically while ON-Bar is running, which can save significant time and disk space by setting high debugging levels only when needed.
BAR_DEBUG_LOG: Specifies the full path name (in an existing directory) of the ON-Bar debugging log file.
BAR_MAX_BACKUP: Defines the degree of parallelism by determining the number of backup and restore processes to run concurrently, including processes for backing up and restoring a whole system. Backup and restore performance can be significantly altered by setting this parameter. Once the BAR_MAX_BACKUP number of running processes is reached, further processes start only when a running process completes its operation. The BAR_MAX_BACKUP setting determines the number of backup streams that the storage manager sees, so consult your storage manager administrator for the appropriate value. BAR_MAX_BACKUP can be set to -1 to order storage spaces during backup and restore by storage-space number rather than by size.
BAR_RETRY: Specifies the number of times ON-Bar should retry a backup or restore operation. The default value is 1.
BAR_NB_XPORT_COUNT: Specifies the number of shared-memory data-transfer buffers each backup or restore process uses to exchange data with the Informix server. The value of this parameter affects ON-Bar performance.
BAR_XFER_BUF_SIZE: Specifies the size of each transfer buffer, in pages. The buffer size must be the same for backup and restore. This value should be set to the maximum allowed for your system (31 on database servers with a 2K base page size and 15 on those with a 4K base page size).
BAR_PROGRESS_FREQ: Specifies the frequency, in minutes, of progress messages in the ON-Bar activity log for backup and restore operations. The default value of 0 means that ON-Bar does not write any progress messages to the activity log. A non-zero value cannot be less than 5 minutes.
BAR_PERFORMANCE: Specifies the type of performance statistics to report in the ON-Bar activity log for backup and restore operations. Valid values are 0, 1, 2, or 3: 0=turn performance monitoring off (the default); 1=display the time spent transferring data between the server and the storage manager; 2=display sub-second accuracy in the timestamps; 3=display both timestamps and transfer statistics.
BAR_HISTORY: Specifies whether the sysutils database maintains a backup history when the onsmsync tool is used to expire old backups. Valid values are 0 or 1: 0=remove records for expired backup objects from the sysutils database; 1=keep records for expired backup objects in the sysutils database.
BAR_BSALIB_PATH: Specifies the shared library for the storage manager. The storage manager and ON-Bar rely on this shared library to interact with each other.
BAR_SIZE_FACTOR: Augments the estimate of a storage-space size before the information is passed to the storage manager. Because a backup is generally done online, the number of pages can change during the backup, and some storage managers fail if the actual backup size is much larger than the estimate provided. The value of BAR_SIZE_FACTOR is taken as a percentage of the original storage-space size and added to the estimate before it is communicated to the storage manager.
BAR_CKPTSEC_TIMEOUT: An external backup on a remote stand-alone (RS) secondary server can fail if the checkpoint on the primary server does not complete within the time-out period. You can increase the time-out period by setting this configuration parameter to a higher value.
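
The BAR_SIZE_FACTOR arithmetic can be sketched directly: the estimate sent to the storage manager is the space size plus BAR_SIZE_FACTOR percent of it. The page count and factor below are hypothetical values for illustration:

```shell
# Hypothetical values: a 100,000-page storage space and BAR_SIZE_FACTOR=20.
space_pages=100000
bar_size_factor=20
# Estimate communicated to the storage manager: size + size * factor / 100.
estimate=$(( space_pages + space_pages * bar_size_factor / 100 ))
echo "$estimate"    # 120000
```

A 20 percent pad leaves headroom for pages added while the online backup is running.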

The three configuration parameters that most significantly affect ON-Bar performance are BAR_MAX_BACKUP, BAR_NB_XPORT_COUNT, and BAR_XFER_BUF_SIZE.


Backing up Informix data with ON-Bar

You can perform the following types of backups with ON-Bar:

  • Standard backup
  • Whole-system backup
  • Physical backup
  • Logical-log backup

Standard ON-Bar backup

In a standard backup, the database server performs a checkpoint for each storage space as it is backed up. You must restore the logical logs from a standard backup for data consistency. You can set the number of parallel ON-Bar processes to run with the BAR_MAX_BACKUP configuration parameter, or you can force a standard backup to run as a serial backup by setting it to 1. ON-Bar supports both full (level-0) and incremental (level-1 and level-2) standard backups of storage spaces. By default, ON-Bar performs a level-0 backup. Table 4 shows some examples of standard ON-Bar backup.

Table 4. Standard ON-Bar backup
onbar -b -L 0: Performs a standard level-0 backup of all online, non-temporary storage spaces and used logical logs
onbar -b -L 0 dbs1 dbs2: Performs a standard level-0 backup of specific storage spaces (for example, two storage spaces named dbs1 and dbs2) and logical logs
onbar -b -L 1: Performs a standard level-1 incremental backup

Whole-system ON-Bar backup

A whole-system backup (onbar -b -w) is a serial or parallel backup of all storage spaces and logical logs based on a single checkpoint. A whole-system backup can be restored without the logical logs, because the data in all storage spaces is consistent in this backup. You can perform an incremental (level-1 or level-2) whole-system backup in conjunction with a level-0 whole-system backup. Table 5 shows example commands for whole-system ON-Bar backups.

Table 5. Whole-system ON-Bar backup
onbar -b -w -L 0: Performs a level-0 whole-system backup of all online, non-temporary storage spaces and used logical logs
onbar -b -w -L 1: Performs a level-1 whole-system backup

ON-Bar physical backup

A physical backup backs up only storage spaces. You can perform a physical backup of specific or all storage spaces using ON-Bar. For example, onbar -b -p performs a physical backup of all storage spaces.

ON-Bar logical-log backup

You must back up logical logs if you perform standard backups, because restoring a standard backup requires both the storage spaces and the logical logs. Table 6 shows the types of logical-log backup.

Table 6. ON-Bar logical-log backup
onbar -b -l: Manually backs up all full logical-log files
onbar -b -l -c: Backs up the current logical-log file as well as the other full logical-log files
onbar -b -l -C: Starts a continuous logical-log backup

Logical-log salvage

ON-Bar backs up logical logs automatically before it restores the root storage space in a cold restore, except when only a physical restore is specified. To avoid data loss, manually salvage the logical logs before starting the cold restore if the device containing the logical logs is still available or if you plan to perform a physical restore only. For example, you can use onbar -l -s to salvage the logical logs manually.

Verifying ON-Bar backups

You can use the examples from Table 7 to verify ON-Bar backups. Note that the logical logs are not verified.

Table 7. Verifying ON-Bar backups
onbar -v: Verifies a backup of all storage spaces. This is applicable to standard and whole-system backups.
onbar -v -f dbsfile: Verifies the backup of the storage spaces listed in a file called dbsfile.
onbar -v -t "<YYYY-MM-DD HH:MM:SS>": Performs a point-in-time verification.

Fake ON-Bar backup

A fake backup is useful when you need to change logging modes on a database, to activate newly created blobspaces, or to make tables available for access after the high performance loader (HPL) has loaded data in express mode. No restore is possible from a fake backup. The onbar -b -F command performs a fake backup. You can run the command whether or not a storage manager application is present. If any storage spaces are specified, they are ignored.

Restoring with ON-Bar

You can perform the following types of restores with ON-Bar:

  • Cold restore
  • Warm restore
  • Mixed restore
  • Logical-log restore
  • Restartable restore

ON-Bar cold restore

If the database server fails because a critical storage space is damaged due to a disk failure, you must perform a cold restore of all critical storage spaces. The database server must be offline for a cold restore. A cold restore first restores all critical storage spaces, then the non-critical storage spaces, and finally the logical logs. After the restore completes, the database server goes into quiescent mode. It can be brought online using the onmode command. You also need to perform a cold restore for one of the following tasks:

  • Whole-system restore
  • Point-in-time restore
  • Point-in-log restore
  • Imported restore
  • Rename-chunks restore

A whole-system restore requires a whole-system backup, but it does not require you to restore the logical logs. If you perform a physical-only whole-system restore, the database server is left in fast-recovery mode after the restore completes. Either perform a logical restore or bring the server online using the onmode command. Table 8 shows examples of ON-Bar whole-system restores.

Table 8. Whole-system restore
onbar -r -w: Performs a whole-system restore with automatic log salvage
onbar -r: Performs a standard restore of the whole-system backup
onbar -r -p -w: Performs a physical-only whole-system restore (no log salvage)
onbar -r -t <time> -w: Performs a whole-system point-in-time restore

A point-in-time restore enables the database server to be restored to a state it was in at a particular point in time. It is typically used in recovering from a mistake such as dropping a database accidentally. In this case, you can restore the server to a point in time just before you dropped the database. For example, the onbar -r -t "<YYYY-MM-DD HH:MM:SS>" command performs a restore to transactions that were committed on or before the specified time.
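
The -t argument must be a timestamp in "YYYY-MM-DD HH:MM:SS" form. A minimal sketch that builds (but does not run) a point-in-time restore command for "five minutes ago"; GNU date's -d option is assumed:

```shell
# Build the timestamp in the format onbar -t expects (GNU date assumed).
when=$(date -d '5 minutes ago' '+%Y-%m-%d %H:%M:%S')
# Print the command rather than executing it; a real run needs the
# server offline and the backup media available.
echo "onbar -r -t \"$when\""
```

Picking a time just before the mistake (for example, just before a database was dropped) is the usual way to choose this value.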

A point-in-log restore is similar to a point-in-time restore. It restores data to the time of the last committed transaction listed in the specified logical log. The onbar -r -n "<Log Number>" command performs a point-in-log restore to a specified log number.

In an imported restore, the data is restored to a different database server instance than the one from which the data was backed up. The XBSA and storage manager versions must be compatible between backup and restore operations.

For a rename-chunks restore, you can rename chunks by specifying new chunks paths and offsets during a cold restore. This option can be used to restore storage spaces to different path names than the one on which the backup was performed. You need to perform a level-0 archive after the rename-chunks restore completes. For example, the onbar -r -rename -p /chunk_old -o 0 -n /chunk_new -o 20000 command renames chunk (path: /chunk_old and offset: 0) to (path: /chunk_new and offset: 20000).

ON-Bar warm restore

You can restore a non-critical storage space in a warm restore if the storage space is down and the database server is not offline. Table 9 shows examples to perform a warm restore.

Table 9. Warm restore
onbar -r: Performs a warm restore of all down storage spaces
onbar -r dbs1 dbs2: Performs a warm restore of specific storage spaces (for example, dbs1 and dbs2)

A warm restore needs to roll forward the logical logs to the current logical log, so it cannot be used to restore accidentally dropped tables or deleted data.

ON-Bar mixed restore

A mixed restore is a cold restore of all critical storage spaces followed by a warm restore of remaining storage spaces. Because not all storage spaces are restored during the initial cold restore, the server can be brought online faster than if you were to perform a cold restore of all the storage spaces.

ON-Bar logical-log restore

To perform a logical-log restore, use the onbar -r -l command. The logical log files are replayed using a temporary space during a warm restore. Make sure that you have enough temporary space for the logical restore.

ON-Bar restartable restore

If a failure occurs during a restore, you can restart the restore from the place where it failed. The RESTARTABLE_RESTORE parameter is ON by default. If the failure occurs during a physical restore, ON-Bar restarts the restore at the storage space at the level where the failure occurred. If a failure occurs during a cold logical restore, ON-Bar restarts the logical restore from the latest checkpoint. Restartable restore does not work for the logical part of a warm restore. To restart a failed restore, issue the onbar -RESTART command.

Viewing the ON-Bar activity log

Whenever a backup or restore activity or error occurs, ON-Bar writes a brief description to the activity log. In the onconfig file, set the parameter BAR_ACT_LOG to specify the location of the ON-Bar activity log file.

You can view recent ON-Bar activity using the onbar -m command. Only users who have permission to perform backup and restore operations can use this option. By default, the command displays the last 20 lines from the ON-Bar activity log. You can change the number of lines, and have the display refresh repeatedly at a fixed interval, by providing additional parameters with the command. Table 10 shows examples of ON-Bar commands to display the contents of the ON-Bar activity log.

Table 10. Activity log commands
onbar -m: Displays the last 20 lines from the ON-Bar activity log
onbar -m 40 -r 3: Displays the last 40 lines from the ON-Bar activity log, refreshing at 3-second intervals
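
Because onbar -m is essentially a convenience view over the BAR_ACT_LOG file, the same tail can be taken with standard tools. A sketch using a temporary file standing in for the real activity log:

```shell
# Build a stand-in activity log with 30 lines.
log=$(mktemp)
for i in $(seq 1 30); do echo "activity line $i" >> "$log"; done

# Equivalent of the default onbar -m view: the last 20 lines.
tail -n 20 "$log" | head -n 1   # first of the last 20: "activity line 11"
tail -n 1 "$log"                # most recent entry: "activity line 30"
rm -f "$log"
```

For the refresh behavior of onbar -m ... -r, a loop (or the watch utility, where available) around the same tail gives a comparable view.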

Viewing logical logs with ON-Bar

ON-Bar enables you to view logical logs that have already been backed up. This means that you can view logical logs that have been backed up and no longer exist on the database server. This is similar to using the onlog utility to view the logical logs that have been backed up by the ontape utility. The storage manager must be available to view the backed up logical logs. The output of this command is displayed to stdout.

You cannot view logical logs that have not yet been backed up with the ON-Bar command. You must use the onlog utility to view logical logs that are still available on the database server disk. For example, the onbar -P -n 289 command displays a long listing of logical log records from a log unique number 289.

Verifying backups with ON-Bar will be covered in the archecker section of this tutorial.

Understanding ontape configuration, commands, and syntax

This section describes the ontape utility.

Configuring the ontape utility

The ontape utility uses six parameters in the ONCONFIG file to create storage-space and logical log backups. Table 11 describes the configuration parameters ontape uses.

Table 11. Ontape backup and restore configuration parameters
TAPEDEV: Specifies the tape device, directory, or file name that is used to back up and restore storage spaces. To configure ontape to use standard I/O, set TAPEDEV to STDIO.
TAPEBLK: Specifies the block size of the device that is used for writes during a storage-space backup. The block size must be the same during backup and restore.
TAPESIZE: Specifies the maximum size of the device that is used for backup and restore. Set it to 0 to use the full tape capacity. The tape size cannot be set to 0 for remote devices.
LTAPEDEV: Specifies the tape device, directory, or file name that is used for logical-log backup and restore.
LTAPEBLK: Specifies the block size of the device that is used for writes during logical-log backup and restore. The block size must be the same during backup and restore.
LTAPESIZE: Specifies the maximum size of the device that is used for logical-log backup and restore. Set it to 0 to use the full tape capacity. The tape size cannot be set to 0 for remote devices.

If TAPEDEV points to a tape device, the tape automatically rewinds after each command completes. Before reading from or writing to a tape, the database server performs a series of checks that require the rewind. If TAPEDEV or LTAPEDEV is set to a file, that file is overwritten by subsequent backups, but ontape warns you before doing so.

Setting TAPEDEV or LTAPEDEV to a directory ensures that each physical or logical-log backup is written to new files.

Setting LTAPEDEV to /dev/null on UNIX or to NUL on Windows turns off logical-log backups. The logical logs are automatically marked as saved so that they can be overwritten. Because the logs are not actually saved, a logical restore is not possible.

Using ontape to back up storage spaces and files

This section describes how to use ontape to back up storage spaces and logical log files.

Storage space backup

The ontape utility supports level-0, level-1, and level-2 backups of storage spaces. It backs up the storage spaces in the following order: root storage space, physical and logical log storage space, blobspaces, smart blobspaces, and other storage spaces.

Before you begin to create a backup with ontape, ensure that the location specified by the TAPEDEV parameter is write-enabled. Table 12 shows examples of commands for storage space backup using ontape.

Table 12. Storage space backup
ontape -s -L 0: Performs a level-0 backup to tape
ontape -s -L 0 -d: Performs a level-0 backup to a directory without prompts
ontape -s -L 0 -t STDIO > Level_0_backup: Performs a level-0 backup to standard output, redirected to a file named Level_0_backup in the current directory

Logical-log backup

You can use ontape to back up all full logical logs; this is known as a manual logical-log backup. Alternatively, ontape can start a continuous log backup, in which the database server automatically backs up each logical-log file as it becomes full, so you never lose more than a partial logical-log file. Neither mode backs up the current (not yet full) logical-log file.

Table 13 shows the commands to perform a logical log backup.

Table 13. Logical log backup
ontape -a: Performs a manual logical-log backup
ontape -c: Starts a continuous logical-log backup

You can also create a continuous logical-log backup to a directory. To end the continuous backup, press Ctrl+C.

Restoring Informix data with ontape

This section describes how to use ontape to restore storage spaces and logical log files.

Cold restore using ontape

The database server must be offline to perform a cold restore. You can salvage the logical logs when the cold restore starts. The ontape utility prompts you to salvage the logical logs. The ontape utility then prompts you to mount the tapes with backup data. When restoring from a directory, ontape prompts you for the path name of the directory. You can avoid the prompt by using the -d option.

At the end of restore, the database server remains in quiescent mode and can be switched to online mode. Table 14 shows examples of cold restore using ontape.

Table 14. Cold restore
ontape -r: Restores all storage spaces
cat Level_0_backup | ontape -p: Performs a physical restore from the standard input file called Level_0_backup

Rename-chunks restore using ontape

You can rename chunks during a cold restore with ontape. However, make sure you perform a level-0 backup after the rename-chunks restore completes. Listing 3 shows an example.

Listing 3. Rename-chunk restore
ontape -r -rename -p /chunk_old -o 0 -n /chunk_new -o 20000

The command in Listing 3 renames the chunk at path /chunk_old, offset 0, to path /chunk_new, offset 20000.

Warm restore

You can perform a warm restore only on non-critical storage spaces. You can perform a warm restore of selected storage spaces (for example, dbspace1 and dbspace2) after restoring the critical storage spaces in a cold restore. Table 15 shows examples of warm restores using ontape.

Table 15. Warm restore using ontape
ontape -r -D dbspace1 dbspace2: Performs a warm restore of the dbspace1 and dbspace2 storage spaces
cat Level_0_backup | ontape -r -D dbspace1 -t STDIO: Performs a warm restore of dbspace1 from the standard input file Level_0_backup

Mixed restore

When you perform a mixed restore, you restore only critical storage spaces and, optionally, one or more non-critical storage spaces during the cold restore. Later, you can warm restore non-critical storage spaces. Listing 4 shows a warm restore example.

Listing 4. Mixed restore command for ontape
ontape -r -D rootdbs llogdbs plogdbs
ontape -r -D dbspace1 dbspace2

The first command performs a cold restore of critical storage spaces (rootdbs, llogdbs, and plogdbs). The second command performs a warm restore of other storage spaces (dbspace1 and dbspace2).

Logical-log restore

You must restore all the logical log files backed up after the last level-0 backup when you perform a mixed restore. When you perform a full restore, you can choose whether to restore logical log files. The ontape -l command performs a logical log restore.

Backing up and restoring using standard I/O (STDIO)

Ontape enables you to use STDIO for physical backup and restore operations. During backup, ontape can write data to stdout; during restore, it can read from stdin. Ontape uses pipes, an operating-system-provided memory-buffer mechanism, to perform backup and restore using STDIO. The advantages of using STDIO with ontape are:

  • No read or write operations to storage media are needed (if you choose to directly pipe the backup data to a restore operation).
  • You can use operating system utilities to compress backup data before storage.
  • You can pipe the backup data through any utility.
  • You can create a duplicate database server by immediately restoring the data onto another machine, such as setting up an initial HDR (or RSS) secondary server.

Set the TAPEDEV configuration parameter value as STDIO to configure ontape to use STDIO.

The TAPESIZE configuration parameter is not used for backups using STDIO, because the capacity of STDIO is assumed to be unlimited. The value of TAPEBLK, however, is still used for the transfer of data between the database server and the ontape process.

A backup using STDIO is written directly to stdout, so the data stream needs to be redirected to a file; otherwise the data stream is sent to the screen. When redirecting stdout to a file, make sure that there is enough space available in the file system. Error and information messages are written to stderr. Table 16 shows examples of commands to back up and restore using STDIO.

Table 16. Back up and restore using standard I/O
ontape -s -L 0 > /informix/backup/archive_L0: Performs a level-0 backup using STDIO. The stdout of the ontape command is redirected to a file named archive_L0 in the /informix/backup directory. The command is the same as a standard ontape physical backup, except that the operating system handles the data stream to the output file.
ontape -s -L 0 | compress -c > /informix/backup/archive_L0: Performs a level-0 backup in which the ontape output is piped through compress, and the compressed data is written to a file named archive_L0 in the /informix/backup directory.

Logical-log backup and restore are not supported with ontape using STDIO. However, if standard logical-log backups are available, they can be restored with the ontape -l command after an ontape physical restore using STDIO. Logical-log salvage is also not possible during the restore process, so you should manually salvage any logs using the ontape -S command before performing an ontape restore using STDIO.

During standard restore, ontape prints information to stdout, but with STDIO, the messages are omitted. Similarly, after restoring a level-0 backup, ontape prompts to restore level-1 and level-2 backups. But during a restore to STDIO, these prompts are omitted. Instead, the input stream is scanned for more data. If more data are found, the next level of backup can be restored. Therefore, all required data must be part of the input stream to the ontape restore command, and the data must be in the correct order. Use the command from Listing 5 to restore level-0 and level-1 backups from files named archive_L0 and archive_L1 in the /informix/backup directory.

Listing 5. Restoring level-0 and level-1 backups using STDIO
 cat /informix/backup/archive_L0 /informix/backup/archive_L1 | ontape -p

The ontape STDIO function enables cloning an Informix database server, or quickly setting up HDR, by performing a simultaneous backup to stdout and restore from stdin. If the backup and restore are done solely to duplicate an Informix database server, use the ontape -F option to prevent the archive from being saved. During a simultaneous backup and restore, although a backup is taken, it cannot be restored at a later time, because it is not saved to a storage device; the backed-up data is transferred through a pipe and, via an rsh operation, immediately restored on another system. Listing 6 shows an example of a command for simultaneous backup and restore.

Listing 6. Simultaneous level-0 backup and restore
ontape -s -L 0 -F | rsh serverB "ontape -p"

The ontape level-0 backup is performed on the local machine. The data is piped through the rsh operating-system utility to a remote machine called serverB, where a physical restore is performed.
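
The pipe pattern in Listing 6 can be simulated locally, with cat standing in for the two ontape processes: backup output streams straight into the restore's stdin, with nothing written to intermediate media.

```shell
# Stand-in for: ontape -s -L 0 -F | rsh serverB "ontape -p"
# printf plays the backup side; 'cat > file' plays the restore side.
tmp=$(mktemp)
printf 'fake archive bytes' | sh -c "cat > $tmp"
wc -c < "$tmp"    # all bytes arrived intact through the pipe
rm -f "$tmp"
```

In the real command, rsh (or, on modern systems, ssh) carries the pipe across the network to serverB.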

The ontape command ignores the -F option if TAPEDEV is not configured as STDIO. The ontape -F backup option with the STDIO configuration means that the archive information is not recorded in the reserved pages.

Using a directory for backup and restore

Ontape enables using a directory for backup and restore operations. The advantage of using a directory is that multiple backups can be performed to the same directory. Ontape automatically renames existing backup files in that directory by appending a date and time to the file name so that backup files do not get overwritten.

The TAPEDEV and LTAPEDEV configuration parameters can be set to a file or a directory for physical backups and logical-log backups, respectively. If set to a file, successive backups overwrite previous backups. Directories specified for TAPEDEV and LTAPEDEV must exist (with read, write, and execute permissions set) before they are used for ontape backups.

Ontape automatically generates the file name during backup. The naming conventions are:

  • Physical backup: Hostname_Servernum_Ln
  • Logical log backup: Hostname_Servernum_Logn

where:

  • Hostname is the name of the machine.
  • Servernum is the value of the SERVERNUM configuration parameter.
  • n is the backup level or the logical log number.

You can override the default backup file naming convention by setting the environment variable IFX_ONTAPE_FILE_PREFIX. Its value replaces the Hostname_Servernum prefix in the default file naming convention. For example, if IFX_ONTAPE_FILE_PREFIX is set to Stores, the physical backup creates files named Stores_L0, Stores_L1, and Stores_L2, and the logical-log backup creates files named Stores_Log0000000001, Stores_Log0000000002, and so on. During a restore, ontape searches the physical and logical backup directories for the expected backup file names; if multiple backups exist in a directory, the restore uses the most recent one.
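
The default names can be sketched directly. The SERVERNUM value (41) and log number (7) below are hypothetical; the zero padding of the log number follows the Stores_Log0000000001 example above:

```shell
# Default ontape backup file names when TAPEDEV/LTAPEDEV are directories.
host=$(hostname)
servernum=41
echo "${host}_${servernum}_L0"                      # level-0 physical backup
printf '%s_%s_Log%010d\n' "$host" "$servernum" 7    # logical-log backup file
```

On a host named stores1 with SERVERNUM 41, this would print stores1_41_L0 and stores1_41_Log0000000007.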

Because all the backup images reside in the same directory, it is the system administrator's responsibility to monitor the available disk space and the security of the files.
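To make the naming convention concrete, the following shell sketch builds the default backup file names. The host name prodhost and server number 0 are assumed example values, not defaults.

```shell
# Illustrate ontape's default backup file naming convention.
# prodhost and 0 are hypothetical values for Hostname and SERVERNUM.
HOST_NAME=prodhost
SERVER_NUM=0

LEVEL=0
PHYS_FILE="${HOST_NAME}_${SERVER_NUM}_L${LEVEL}"       # physical, level-0

LOGNUM=$(printf '%010d' 1)                             # zero-padded log number
LOG_FILE="${HOST_NAME}_${SERVER_NUM}_Log${LOGNUM}"     # first logical log

echo "$PHYS_FILE"    # prodhost_0_L0
echo "$LOG_FILE"     # prodhost_0_Log0000000001
```

Setting IFX_ONTAPE_FILE_PREFIX=Stores would replace the prodhost_0 portion of both names with Stores.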

Monitoring and debugging

With ontape, you have only the console and message log to monitor. The ON-Bar utility offers more options to debug and monitor the progress of the backup or restore operation. You can use BAR_ACT_LOG, BAR_DEBUG_LOG, or other configuration parameters to monitor ON-Bar progress.

You can set the level of reporting to be written to the ON-Bar activity log by using the BAR_PERFORMANCE configuration parameter. You can configure the report to contain sub-second timestamps for ON-Bar processing, as well as the transfer rates between ON-Bar and the storage manager, and between ON-Bar and the Informix instance.

The BAR_PROGRESS_FREQ configuration parameter specifies how frequently, in minutes, the backup or restore progress messages display in the activity log.

The most valuable tools for debugging an ON-Bar-specific problem are the BAR_ACT_LOG and the BAR_DEBUG_LOG. Setting BAR_DEBUG to 9 enables tracing of ON-Bar and XBSA calls; the messages generated in the BAR_DEBUG_LOG can then be used to determine which call is failing or returning the error encountered.
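A monitoring-oriented onconfig fragment might look like the following. The log paths are hypothetical examples, and the parameter values shown (maximum debug level, performance reporting enabled, five-minute progress messages) are one possible debugging configuration, not recommended production settings.

```
BAR_ACT_LOG        /usr/informix/logs/bar_act.log    # ON-Bar activity log
BAR_DEBUG_LOG      /usr/informix/logs/bar_dbug.log   # ON-Bar debug log
BAR_DEBUG          9                                 # trace ON-Bar and XBSA calls
BAR_PERFORMANCE    3                                 # timestamps and transfer rates
BAR_PROGRESS_FREQ  5                                 # progress messages every 5 minutes
```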

Using external backup and restore

An external backup or restore operation is performed external to Informix, without using ON-Bar or ontape during the backup or restore. The data is backed up or restored using a third-party tool instead.

Using external backup

Following are the high-level steps required to perform an external backup:

  1. Block the database server using onmode -c block.
  2. Back up all the storage spaces and administrative files using a third-party tool or copy command.
  3. Unblock the database server using onmode -c unblock.
  4. Back up all the logical logs, including the current log using ontape -a (ontape) or onbar -b -l -c (ON-Bar).
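The steps above can be sketched as the following shell sequence. The run function only echoes each command, because onmode and ontape require a live Informix instance; remove the echo to execute for real. The chunk and backup paths are hypothetical.

```shell
# Dry-run sketch of the external backup sequence.
run() { echo "+ $*"; }

run onmode -c block                   # 1. block the database server
run cp -R /ifx/chunks /extbackup/     # 2. copy storage spaces with any external tool
run onmode -c unblock                 # 3. unblock the database server
run ontape -a                         # 4. back up logical logs (or: onbar -b -l -c)
```

While the server is blocked, transactions that attempt to modify data wait, so the copy in step 2 should be as fast as the external tooling allows.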

Using external restore

Following are the high-level steps required to perform an external restore:

  1. Salvage the logical logs using ontape -S (ontape) or onbar -b -l -s (ON-Bar).
  2. Restore all the storage spaces from an external backup to the original locations using a third-party tool or copy command.
  3. Perform an external restore of all storage spaces and logical logs using ontape -p -e followed by ontape -l (using ontape) or onbar -r -e (using ON-Bar).
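The restore steps can be sketched the same way, again as a dry run (run echoes instead of executing; the paths are hypothetical):

```shell
# Dry-run sketch of the external restore sequence.
run() { echo "+ $*"; }

run ontape -S                         # 1. salvage logical logs (or: onbar -b -l -s)
run cp -R /extbackup/chunks /ifx/     # 2. copy storage spaces back with any external tool
run ontape -p -e                      # 3. external physical restore (or: onbar -r -e)
run ontape -l                         #    then complete the logical restore
```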

Using external backups on RS secondary servers

You can perform an external backup of an RS secondary server. Performing the backup blocks that RS secondary server, but it does not block the primary server. You can perform a logical restore using the logs backed up from the primary instance. A backup obtained from a secondary server cannot be restored together with level-1 or level-2 backups.

The external backup will not be complete if the database instance contains any of the following:

  • Non-logging smart large objects
  • Regular blobspaces
  • Non-logging databases
  • Raw tables

If an external backup is performed on an instance that contains any of the above items, the backup will be incomplete and cannot be used to restore the primary server.

Table 17 shows configuration parameters that are important to perform external backup on an RS secondary server.

Table 17. Configuration parameters for external backup on RS
STOP_APPLY: Stops the application of the logical log files on the RS secondary server. It does not have to be set on the RS secondary server before an external backup, because the external backup sets it itself.
LOG_STAGING_DIR: Defines where logical logs received from the primary are stored while STOP_APPLY is in effect.

After the archive checkpoint is processed, the RS secondary server stops applying logical logs, but it continues receiving logs from the primary server. The primary database server must be online or in quiescent mode during an external backup.

To perform the external backup on RSS, follow the same steps described for the external backup.

Logical log backup is possible only on the primary server.

Using the archecker

The archecker utility is available for backup verification of backups taken by ontape and ON-Bar. The archecker utility also has a table-level restore option, which is useful when portions of a database, a table, or a set of tables need to be recovered from level-0 backup and subsequent logical logs.

Archecker has its own configuration file that needs to be set before use. The default configuration file is $INFORMIXDIR/etc/ac_config.std or the Windows equivalent. You can also create your own configuration file. If the default AC_CONFIG file is not being used, set the AC_CONFIG environment variable to the full path name of the configuration file.

The ac_config.std file comes with three configuration parameters: AC_MSGPATH, AC_STORAGE, and AC_VERBOSE. There are, however, several other configuration parameters that can be set to configure different archecker options. Table 18 shows the configuration parameters related to archecker.

Table 18. Archecker configuration parameters
AC_MSGPATH: Specifies the location (full path name) of the archecker message file (ac_msg.log).
AC_STORAGE: Specifies the location of the temporary files that archecker generates. Specify the full path of a storage location with plenty of free space. The default value is /tmp or the Windows equivalent. Avoid using /tmp, because the system may become unusable if the /tmp filesystem fills up. Archecker uses the current directory if AC_STORAGE is not set.
AC_VERBOSE: Specifies either verbose or quiet mode for archecker messages. By default, this parameter is set to On.
AC_DEBUG: Prints debugging messages in the archecker message log (AC_MSGPATH). By default, this parameter is set to Off. Enabling this parameter can cause the archecker message log file to grow very large and slow down the archecker process.
AC_IXBAR: Specifies the pathname of the IXBAR file when using archecker with an ON-Bar backup.
AC_LTAPEBLOCK: Specifies the tape block size in kilobytes for reading logical logs. This parameter must be set to the same value as the configuration parameter LTAPEBLK (for an ontape backup) or BAR_XFER_BUF_SIZE multiplied by the database server base page size (for an ON-Bar backup).
AC_LTAPEDEV: Specifies the device name used by ontape for reading logical logs.
AC_SCHEMA: Specifies the pathname of the archecker schema command file. You can override this parameter with the -f command line option.
AC_TAPEBLOCK: Specifies the tape block size in kilobytes. It must be set to the same value as the configuration parameter TAPEBLK (for an ontape backup) or BAR_XFER_BUF_SIZE multiplied by the database server base page size (for an ON-Bar backup).
AC_TAPEDEV: Specifies the device name used by the ontape utility.
AC_TIMEOUT: Specifies the timeout value for the ON-Bar and archecker processes if one of them exits prematurely. This parameter prevents the ON-Bar and archecker processes from waiting for each other indefinitely.

Starting in Informix Version 11, if the values of AC_LTAPEDEV, AC_TAPEDEV, AC_LTAPEBLOCK, and AC_TAPEBLOCK are not set, archecker uses the values for similar types of parameters from the current database server configuration file.
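Putting the common parameters together, a minimal archecker configuration file might look like the following. The paths are hypothetical examples, and the device values assume an ontape directory backup like the one configured earlier.

```
AC_MSGPATH   /home/informix/ac_msg.log    # archecker message file
AC_STORAGE   /home/informix/ac_storage    # temporary files (avoid /tmp)
AC_VERBOSE   1                            # verbose messages
AC_TAPEDEV   /backups/ifx/physical        # matches TAPEDEV
AC_LTAPEDEV  /backups/ifx/logical         # matches LTAPEDEV
```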

Verifying backups using archecker

You can invoke the archecker utility in either of two ways: from the command line, which is called stand-alone, or through the ON-Bar backup verification option -v, which is called integrated. You must use the stand-alone method to run an archecker verification of an ontape backup. To verify an ON-Bar backup, using the onbar -v option is recommended.

In either case, as the utility executes, archecker messages are stored in the archecker message file. Listing 7 shows messages generated in the archecker message log during a backup validation.

Listing 7. Example of archecker message log (edited to reduce the line count)
Program Name: archecker
Version: 8.0
Released:       2011-05-11 22:37:01
CSDK:           IBM Informix CSDK Version 3.70
ESQL:           IBM Informix-ESQL Version 3.70.UC1
Compiled:       05/11/11 22:37 on SunOS 5.9 Generic_118575-28

AC_STORAGE    /home/informix/storage
AC_MSGPATH    /home/informix/ac_msg.log
AC_TAPEDEV    /TAPE/informix/backup/
AC_LTAPEDEV   /TAPE/informix/backup/

Archive file  /TAPE/informix/backup/prod_L0
Tape type: Archive Backup Tape
Informix version: IBM Informix Dynamic Server Version 11.70.UC2
Archive date: Sun May 4 23:29:41 2011
Archive Level: 0
Tape blocksize: 32768
Tape size: 2147483647
Tape number in series: 1
Control page checks PASSED
Reserve page validation PASSED
Checking rootdbs:TBLSpace
Checking sysmaster:sysdatabases
Checking sysmaster:systables
Checking sysmaster:syscolumns
Checking datadbs1:TBLSpace
Checking stores7:systables
Checking stores7:customer
Table checks PASSED
Tables/Fragments validated: 345
Archive Validation PASSED.

Listing 7 shows that the archecker utility performed checks on each table in the databases that are stored in different storage spaces. After each database in the storage spaces is checked, the data stream for the storage space is parsed to make sure that the spaces can be restored. After all the spaces have been checked, the utility responds with an overall pass or fail result.

Following is a list of some items that archecker validates:

  • Format of each page on the archive
  • Tape control pages
  • Each table to ensure all pages of the table exist on the archive
  • Reserved page format
  • Each chunk free list
  • Table extents (checked for overlap)

The following commands are examples of archecker commands for backup verification using ON-Bar:

  • archecker -bvs
  • onbar -v

The archecker -tvs command is the archecker command for backup verification using ontape.

When running successive verifications, be sure to empty the AC_STORAGE directory of files remaining from previous verifications. You can also add the -d option to the standalone archecker commands to indicate that archecker should clean up any files in AC_STORAGE before starting the new verification, as follows:

  • archecker -bdvs
  • archecker -tdvs

With the standalone archecker command, the -v option enables verbose output and the -s option prints status messages to the screen.

If you use the ON-Bar validate option (onbar -v), the ON-Bar utility calls the archecker directly. In addition to the output in the archecker message log, additional information is recorded in the ON-Bar activity log.

Using the ON-Bar interface to the archecker utility provides additional validation options not available from the command line. You can validate to a specific moment in time by using the logical logs, and you can specify a subset of storage space to check. Listing 8 shows additional syntax available with the ON-Bar API.

Listing 8. Additional syntax available with ON-Bar API for validating a backup
onbar -v [ -t <time_stamp>] [-w] [ -f <filename> | <list_of_spaces>]

In Listing 8, the -w flag is used for whole system backups. You can also list the storage spaces to check or use a file with the names of storage spaces after the -f option.

Performing table-level restores with archecker

The archecker utility lets you restore all or part of one or more tables from an archive. Tables can be restored up to a specific point in time. This means that you can restore specific pieces of data without having to perform a lengthy restore of the entire archive. This is very useful, for example, to restore a table that has accidentally been dropped.

Archecker enables table-level restore without taking the database server or storage space offline. Depending on the schema command file you create, it can also enable you to restore tables to a specific point-in-time, filter the data (such as column_1 <= 250), and redirect the restore to a different storage space.

Using archecker, you can extract data from one database server's backup and send the data to another database server (possibly on a different operating system platform) anywhere on the network through a type of distributed transaction operation. Archecker is a flexible and powerful restore utility. You can restore tables using different table names, restore only the selected columns for a table, restore as an external table (restore data to a file), or dump data to an ASCII file.

Just like a normal restore, a table-level restore has two parts: physical and logical. During the physical restore, data is read from a level-0 backup, and all data that matches the schema command file conditions is extracted.

During the logical restore, after the level-0 backup has been processed, the logical log records created after the backup are parsed for transactions executed on the source table, which need to be applied to the table being restored. Archecker reads logical log records only from backed-up logical logs. Before starting the table-level restore, ensure that all the logical logs you want archecker to process have been backed up.

You can control whether the log records are only extracted (called staging) or extracted and applied (called applying). The default for table-level restore is to run the stage and apply parts of the logical restore in parallel. During the logical restore part of a table-level restore, log records for transactions that match the schema command file conditions are staged in a temporary table and then replayed against the target tables. As one archecker process stages the log records, another archecker process applies them. In this way, you can bring the restored table back close to the desired point.

During the logical part of table-level restore, archecker does not apply any associated drop table log records. Archecker table-level restore stops restoring the table when it sees a drop table or a truncate table log record for the table being restored.

Generally, archecker table-level restore writes status messages to the archecker message file. Some status messages are also printed to the screen when the -s option is used. Listing 9 shows example messages in the archecker message log during an execution of a table-level restore.

Listing 9. Example of archecker message log during table-level restore
IBM Informix Dynamic Server Version 11.70.FC2
Program Name:   archecker
Version:        8.0
Released:       2011-05-01 22:37:01
CSDK:           IBM Informix CSDK Version 3.70
ESQL:           IBM Informix-ESQL Version 3.70.UC1
Compiled:       05/11/08 22:37 on SunOS 5.9 Generic_118575-28

AC_STORAGE               /home/informix/storage
AC_MSGPATH               /home/informix/ac_msg.log
AC_VERBOSE               on
AC_TAPEBLOCK             32 KB
AC_IXBAR                 /home/informix/etc/ixbar.0
Dropping old log control tables
Extracting table stores7:customer into stores7:customer

Control page checks PASSED
Table checks PASSED
Table extraction commands 1
Tables found on archive 1
LOADED: stores7:customer produced 50 rows.
Creating log control tables
Staging Log 27
Logically recovered stores7:customer Inserted 0 Deleted 0 Updated 0

You need to execute archecker from a command line to perform the table-level restore. The physical and logical restore can be performed separately. The logical restore can also be run as separate staged and applied commands. The simplest way to run the archecker table-level restore command is to do the entire restore in a single command.

The following commands are examples of archecker commands for table-level restore using ON-Bar:

  • archecker -bvs -f <cmdfile>
  • archecker -bvs -f <cmdfile> -l phys, stage, apply

The following commands are examples of archecker commands for table-level restore using ontape:

  • archecker -tvs -f <cmdfile>
  • archecker -tvs -f <cmdfile> -l phys, stage, apply

When running successive archecker table-level restore commands, be sure to clean up files and tables left from previous restore attempts. You can use the archecker -DX command to clean up all working files and tables after an archecker table-level restore.
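A complete ontape-based table-level restore might be driven as follows, again as a dry run (run echoes each command rather than executing it; /tmp/customer.cmd is a hypothetical schema command file):

```shell
# Dry-run sketch of an archecker table-level restore workflow.
run() { echo "+ $*"; }

run archecker -DX                                  # clean up prior working files/tables
run archecker -tvs -f /tmp/customer.cmd            # physical + logical restore in one pass

# Or split the stages; do not use -d/-D between stages, because the
# later stages need the working files from the earlier ones.
run archecker -tvs -f /tmp/customer.cmd -l phys
run archecker -tvs -f /tmp/customer.cmd -l stage,apply
```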

Listing 10 shows the syntax, including all the options, to use archecker for a table-level restore.

Listing 10. Table-level restore syntax
archecker [-b | -t] -X [-f cmdfile] [-v] [-s] [-l phys | stage | apply]

The -b or -t option indicates whether the backup media is from an ON-Bar or ontape operation, respectively. The -X option indicates that this is a table-level restore. Use the -f flag if you want to override the AC_SCHEMA archecker configuration parameter with a different schema command file. The -v and -s flags are for verbose mode and the output of the archecker message log information to the screen, respectively. You can also control how the table-level restore operation executes with the -l (lowercase L) flag. By default, the operation executes all three stages: phys, stage, and apply. However, you can specify just one or two comma-separated stages. For example, if you specify -l phys,stage, the data is extracted and stored in the temporary staging tables but not applied. The -d or -D option can be used to delete previous archecker restore files and working tables.

If you are breaking up your table-level restore into separate stages, be careful not to use the -d or -D option in a follow-up command, because the archecker working files and tables from the first stage of the restore will be needed in the later stages of the restore.

Using the archecker schema command file

The schema command file is a major component of the archecker table-level restore operation. This file uses an SQL-like language to provide information that archecker uses during table-level restore. The full path to this file can be defined with the AC_SCHEMA archecker configuration parameter or passed as an argument with the -f flag when invoking archecker.

The structure of the schema command file is simple. It contains five major SQL statements:

  • Database
  • Create table
  • Insert into
  • Restore
  • Set

Additional SQL select and insert commands within the insert into statement are supported.

Complete the following steps to create the schema command file.

  1. Open a database using the DATABASE command.
  2. Define the exact schema of the source table from which data will be extracted using the CREATE TABLE command. If you want to restore the data to a different table, you need to define the other table as a target table. The schema of the target table does not need to match the source table. The schemas can be completely different in terms of number of columns, column name, and partitioning scheme. The data types of the columns to be loaded must be at least compatible, if not exact. Defining a target table is not mandatory.
  3. Specify an INSERT INTO statement that inserts data into a target table, together with a SELECT from the source table. You can use WHERE clauses to filter data; filters can be applied only during a physical restore. If the schema of the target table does not match the source table schema, select the appropriate columns in the SELECT statement.
  4. Specify the execution of a physical or logical restore, with an optional point-in-time, using the RESTORE statement. A RESTORE statement is not mandatory; without it, archecker performs both physical and logical restore by default. If you want to do only a physical restore, use the RESTORE TO CURRENT WITH NO LOG statement.
  5. Use the SET command to specify the transaction interval (the number of rows processed before a COMMIT WORK is executed) to prevent a long transaction from occurring. You can also specify a database storage space name for storing the archecker working tables. By default, archecker uses the root storage space. Any storage space other than a temporary space can be used. SET is not mandatory.

Listing 11 shows an example of a point-in-time table-level restore, where data is selected from an archived table and restored to a different table name. Data is also filtered based on the col1 value.

Listing 11. Example of a point-in-time table-level restore
DATABASE stores7;

CREATE TABLE source (
   col1     SERIAL,
   col2     CHAR(20),
   col3     SMALLINT
) IN dbspace1;

CREATE TABLE target (
   col1     INT,
   col2     CHAR(20)
) IN dbspace2;

INSERT INTO target SELECT col1, col2 FROM source
WHERE col1 > 5000;

RESTORE TO '2008-06-01 01:01:01';

SET COMMIT TO 2000;
SET WORKSPACE TO dbspace3;


Data is selected from a table named source and inserted into a table named target. The table schemas are not the same; therefore, columns are mapped in the INSERT statement. Data is also filtered based on the col1 value using the WHERE clause. Restore will occur up to the point-in-time 2008-06-01 01:01:01. The commit interval is set to 2000 records, and the storage space dbspace3 is used to store the archecker table-level restore working tables.

Using dbexport and dbimport

Dbexport and dbimport are two simple utilities, which run without any prior configuration and are easy to use. Although these utilities are for migrating data between Informix database servers, you can use them to back up and restore small databases. Both dbexport and dbimport utilities are platform independent. You can use dbexport to export an Informix Version 7.31 database on a Windows platform, and then you can use dbimport to import the same database to Informix version 11.70 on a Solaris platform.


The dbexport utility unloads data in text format from every table in a database and creates a schema file for the database. It also creates a message file called dbexport.out in the current directory that contains warnings and error messages. You can unload the data to a file or tape. During export, the database is locked in exclusive mode for referential integrity. If the database fails to obtain the exclusive lock, the dbexport utility aborts with an error.

The dbexport utility generates individual unload files (.unl) for each table in the database. When unloading data to a disk, dbexport creates a subdirectory called database.exp. The utility unloads the data and writes a schema file into the directory. If you do not specify the unload directory path, dbexport creates a database.exp directory under the current directory.

Listing 12 shows the syntax, including all the options, of the dbexport command. Arguments to the dbexport command are order independent.

Listing 12. dbexport command syntax
 dbexport <database> [-X] [-c] [-q] [-d] [-ss [-si]]
   [{ -o <directory> | -t <tapedevice> -b <blocksize> -s <tapesize> 
   [-f <schema>] }]

<database>      Specifies the name of the database you are exporting

-X	        Recognizes HEX binary data in character fields

-c	        Ignores non-fatal errors and continues

-q	        Suppresses error messages, warnings etc. from display to screen

-d	        Exports simple-large-object descriptors only, not simple-large-object data

-ss	        Generates database server-specific information for all tables in the
                specified database

-si	        Excludes the generation of index storage clauses for non-fragmented tables

-o <directory>	Specifies the directory on disk in which dbexport creates the
		database.exp directory. This holds the data files and the schema
		file that dbexport creates for the database.

-t <tapedevice>	Specifies the pathname of the tape device for export data and schema file

-b <blocksize>	Specifies the block size of the tape device in KB

-s <tapesize>	Specifies the capacity of tape (in KB)

-f <schema>	Specifies the pathname where you want the schema file stored, while 
                exporting data on tape

Listing 13 shows a simple example of dbexport.

Listing 13. Dbexport command example
dbexport -o /export/data <database> -ss

In Listing 13, <database> is the name of database. The -o flag specifies the unload directory. The -ss flag generates server-specific configurations of the database, such as extent size, lock mode, table fragmentation, and storage space name where the table resides.

You can create files larger than 2 GB using dbexport. However, make sure that the operating system supports files larger than 2 GB.

Keep a storage space layout (dbschema -c) and list of environment settings to prevent any unexpected results after the data have been imported.


The dbimport utility creates the database and loads data based on the schema file and unload data generated by dbexport. The user who runs dbimport is granted the DBA privilege on the newly created database. During import, dbimport exclusively locks each table and unlocks it once the import completes. You can import data from files on disk or tape. You can also have the schema file located on disk and the data files on tape. The dbimport utility creates a message file called dbimport.out in the current directory that contains warning and error messages. While importing data, the dbimport utility expects the database schema and the unload data to be available in the database.exp directory under the current directory, unless you specify the location with a dbimport command option.

Listing 14 shows the syntax of the dbimport command, including the available options. Arguments to dbimport command are order independent.

Listing 14. dbimport command syntax
dbimport <database> [-X] [-c] [-q] [-d <storage space>]
   [-l [{ buffered | <log-file> }] [-ansi]]
   [{ -i <directory> | -t <tapedevice> [ -b <blocksize> -s <tapesize> ] 
   [-f <schema>] }]

<database>	    Specifies the name of the database to create

-X	            Recognizes HEX binary data in character fields

-c	            Ignores non-fatal errors and continues

-q	            Suppresses error messages, warnings etc. from display to screen

-d <storage space>  Specifies the storage space where database will be
                    created. The default storage space is the rootdbs.

-l	            Represents the logging mode of database. If no option is used,
                    the database is created as non-buffered. The -l buffered option
                    creates a buffered logging database. 

<log-file>	    Specifies the name of the transaction-log file. Used with Informix 
		    Standard Engine only.

-ansi	            Specifies that the database created is ANSI compliant

-i <directory>	    Specifies location of database.exp directory on disk. This holds the 
                    data files and the schema file that dbimport uses for create 
                    database and load data.

-t <tapedevice>	    Specifies the pathname of the tape device that contains the data
		    and schema file

-b <blocksize>	    Specifies the block size of the tape device in KB

-s <tapesize>	    Specifies the capacity of tape (in KB)

-f <schema>	    Specifies the location of schema file, while importing data from tape

Listing 15 shows a simple example of dbimport.

Listing 15. dbimport command example
dbimport -i /export/data <database>

In Listing 15, <database> is the name of database. The -i flag specifies the location of database.exp directory on the disk that holds the data and schema file.

Before you start importing data, make sure you have set all the environment variables and checked that the storage space layout is the same as on the database server from which the data was exported.

It is better not to edit the unloaded data or the schema file. However, if you delete any records from the unloaded data, you must also update the corresponding unload comment in the schema file.
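The export-then-import round trip can be sketched as follows, as a dry run (run echoes each command rather than executing it, since both utilities need a running Informix server; /export/data is a hypothetical directory, and stores7 is the demo database used elsewhere in this tutorial):

```shell
# Dry-run sketch of migrating a small database with dbexport/dbimport.
run() { echo "+ $*"; }

run dbexport -o /export/data stores7 -ss    # unload data + schema on the source server
run dbimport -i /export/data stores7        # recreate and load on the target server
```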

Using cloud computing for ontape backup and restore

You do not need to configure any tape device for backup to the cloud. Instead, the backed-up data is stored on an online storage service over the Internet.

You can use the ontape utility to back up and restore Informix database data to or from cloud storage. Storing data on the cloud provides scalable storage that can be accessed from the web. There are several cloud computing platforms available. Some of the more mature cloud platforms are Amazon Simple Storage Services (S3) and IBM SmartCloud. In this section, you will learn how to configure and use the ontape utility to back up and restore data to or from the Amazon S3 cloud.

Before you consider Amazon S3 as your backup and restore solution with the ontape utility, you must have an Amazon account to perform cloud storage backups. In order to use the ontape backup to cloud, you need Java Version 1.5 or later installed on your computer.

Configuring for backup to the cloud

Complete the following steps to back up data to the Amazon S3 cloud, and to restore it, using the ontape utility.

  1. Using a web browser, log into your Amazon S3 account and look up your security credentials (AccessKey and SecretKey). You will find the AccessKey under Account > Security Credentials > Access Credentials. The SecretKey will be shown only after clicking the View button.
  2. Store the access credentials in a file. Set the permissions on the file to allow access only to the user running the ontape utility. You can store the values in a file in $INFORMIXDIR/etc/ifxbkpcloud.credentials (%INFORMIXDIR%\etc\ifxbkpcloud.credentials on a Windows platform). Listing 16 shows an example of the ifxbkpcloud.credentials file format.
    Listing 16. Access credentials
  3. Use the ifxbkpcloud.jar utility to create a storage device in the region where you intend to store backup data. Amazon uses the term bucket to describe the container for backup data. The storage device name you choose has the same restrictions as a bucket name in Amazon S3 and must be unique. Use the command in Listing 17 to create a storage device named mytapedev in the U.S. Eastern region on Amazon S3. Run the command from the $INFORMIXDIR/bin directory on UNIX systems, or from %INFORMIXDIR%\bin on Windows systems.
    Listing 17. Storage device creation
    java -jar ifxbkpcloud.jar CREATE_DEVICE amazon mytapedev US_East
  4. Set the TAPEDEV and LTAPEDEV Informix configuration parameters to use cloud storage with ontape backup. Use these parameters to specify a comma-separated list of items that configure cloud storage with ontape. Table 19 describes the items needed to specify with TAPEDEV and LTAPEDEV configuration parameters.
Table 19. Configuration parameters for backup and restore to cloud
1: Specifies the local directory where the temporary backup file is stored for both backup and restore operations.
2: Specifies whether to delete or keep the temporary file after it has been successfully transferred to Amazon S3.
3: Specifies the name of the cloud storage provider.
4: Specifies the URL of the Amazon S3 bucket. Use https to secure the data transmission.

Listing 18 shows an example of TAPEDEV and LTAPEDEV configuration parameters in the onconfig file to point to the cloud storage location.

Listing 18. Example of TAPEDEV and LTAPEDEV configuration
TAPEDEV   '/informix/tapedev_dir, keep = no, cloud = amazon, 
           url ='

LTAPEDEV   '/informix/ltapedev_dir, keep = no, cloud = amazon, 
           url ='

You can use any ontape command against the cloud storage for backup and restore. Data should be encrypted before it is transferred to cloud storage. To encrypt data, use the BACKUP_FILTER and RESTORE_FILTER configuration parameters. The archecker utility does not support table-level restore of data from cloud storage.

There is a limitation on the size of ontape backup objects stored in S3, currently 5 GB per object. This 5 GB limit comes from an older Amazon S3 restriction that Amazon has since raised to 5 TB per object; the larger 5 TB size is expected to be supported by ontape in an upcoming Informix release.

Using Informix startup and fast recovery

Fast recovery is an automatic, fault-tolerant process that Informix server executes every time it moves from offline to quiescent, administration, or online mode. You are not required to take any administrative actions for fast recovery; it is an automatic feature.

As part of the shared-memory initialization, the database server checks the contents of the physical log. The physical log is empty when the database server shuts down in a controlled manner. Informix performs a checkpoint when it moves from online mode to quiescent mode, which flushes the physical log. An uncontrolled shutdown can leave pages in the physical log, which triggers fast recovery during the next startup.

If a database uses buffered logging, some logical-log records associated with committed transactions might not be written to the logical log at the time of the failure. If this occurs, fast recovery cannot restore those transactions. Fast recovery can restore only transactions with an associated COMMIT record stored in the logical log on disk.

During fast recovery, the physical log can overflow. If this occurs, the database server tries to extend the physical log space to a disk file named plog_extend.servernum. The default location of this file is $INFORMIXDIR/tmp. Use the ONCONFIG parameter PLOG_OVERFLOW_PATH to define the location for creating this file.

For databases or tables that do not use logging, fast recovery restores the database to its state at the time of the most recent checkpoint. All changes made to the database since the last checkpoint are lost.

Fast recovery follows this process:

  1. The database server uses the data in the physical log to return all disk pages to their condition at the time of the most recent checkpoint. This point is known as physical consistency.
  2. The database server locates the most recent checkpoint record in the logical-log files.
  3. The database server rolls forward all logical-log records written after the most recent checkpoint record.
  4. The database server rolls back all uncommitted transactions. Some XA transactions might be unresolved until the XA resource manager is available.
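
The four steps above can be sketched with toy in-memory structures. This is an illustrative model of the sequence only, not Informix internals; all page identifiers, log-record fields, and transaction IDs are hypothetical:

```python
def fast_recovery(pages, physical_log, logical_log):
    """Toy model of fast recovery over an in-memory page dictionary."""
    # Step 1: physical recovery -- restore the before-images captured
    # at the most recent checkpoint (physical consistency).
    for page_id, before_image in physical_log.items():
        pages[page_id] = before_image

    # Steps 2 and 3: roll forward logical-log records written after
    # the most recent checkpoint record.
    open_txns = {}
    for rec in logical_log:
        if rec["op"] == "update":
            pages[rec["page"]] = rec["after"]
            open_txns.setdefault(rec["txn"], []).append(rec)
        elif rec["op"] == "commit":
            open_txns.pop(rec["txn"], None)  # txn is durable; keep its changes

    # Step 4: roll back every transaction that never logged a COMMIT record.
    for recs in open_txns.values():
        for rec in reversed(recs):
            pages[rec["page"]] = rec["before"]
    return pages


pages = {"p1": "v0", "p2": "v0"}
physical_log = {"p1": "v0"}  # before-image captured at the last checkpoint
logical_log = [
    {"op": "update", "txn": 1, "page": "p1", "before": "v0", "after": "v1"},
    {"op": "commit", "txn": 1},  # transaction 1 committed before the crash
    {"op": "update", "txn": 2, "page": "p2", "before": "v0", "after": "v2"},
]  # transaction 2 never committed
result = fast_recovery(pages, physical_log, logical_log)
print(result)  # committed change kept, uncommitted change rolled back
```

Running the sketch leaves p1 with the committed value "v1" while p2 returns to "v0", mirroring how fast recovery preserves committed work and rolls back uncommitted transactions.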

Understanding physical logging

Physical logging is the process of storing the before-images of pages that the database server is about to change, before the changes are actually recorded on disk. Before the database server modifies certain pages in the shared-memory buffer pool, it stores before-images of those pages in the physical-log buffer in shared memory.

The database server maintains the before-image page in the physical-log buffer in shared memory for those pages until one or more page cleaners flush the pages to disk. The unmodified pages are available in case the database server fails or the backup procedure requires them to provide an accurate snapshot of the database server data. Fast recovery and database server backups use these snapshots.

The database server recycles the physical log at each checkpoint, with exceptions.

Using fast recovery and physically logged pages

After a failure, the database server uses the before-images of pages to restore these pages on the disk to their state at the last checkpoint. Then the database server uses the logical-log records to return all data to physical and logical consistency, up to the point of the most-recently completed transaction. If multiple modifications were made to a page between checkpoints, typically only the first before-image is logged in the physical log.

Several configuration parameters influence the recovery process. Table 20 shows the configuration parameters used in the recovery process.

Table 20. Configuration parameters for recovery
Parameter           Description
PLOG_OVERFLOW_PATH  Specifies the location of a disk file (named plog_extend.servernum) that the database server uses if the physical log file overflows during fast recovery. The database server removes the plog_extend.servernum file when the first checkpoint is performed during a fast recovery.
OFF_RECVRY_THREADS  Specifies the number of recovery threads used in logical recovery when the database server is offline (during a cold restore). This number of threads is also used to roll forward logical-log records in fast recovery.
ON_RECVRY_THREADS   Specifies the maximum number of recovery threads that the database server uses for logical recovery when the database server is online (during a warm restore).
RTO_SERVER_RESTART  Enables you to use recovery time objective (RTO) standards to set the amount of time, in seconds, that Informix has to recover from a problem after you restart Informix and bring it into online or quiescent mode.
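
For example, the recovery-related parameters from Table 20 might appear together in the onconfig file as follows; the values shown are illustrative assumptions, not recommendations from this tutorial:

```
PLOG_OVERFLOW_PATH /informix/tmp
OFF_RECVRY_THREADS 10
ON_RECVRY_THREADS  1
RTO_SERVER_RESTART 60
```

Tune the thread counts to the number of dbspaces and available CPUs, and set RTO_SERVER_RESTART only if you want Informix to manage checkpoints toward a target recovery time.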

Understanding return codes and the oninit utility

The database server returns an error message and a return code value, if it encounters any error during startup. The return codes provide detailed information about the reason for a failure. Table 21 shows some common return codes and possible actions.

Table 21. Return codes generated from the oninit utility
Return code | Message text | User action
0 | The database server was initialized successfully. | None.
1 | The server initialization failed. Look at any error messages written to stderr or the online message log. | Take the appropriate action based on the error messages written to stderr or the online message log.
87 | The database server has detected security violations, or certain prerequisites are missing or incorrect. | (UNIX and Mac OS only) Check whether the user and group informix exist. Check whether the server configuration file (onconfig) and sqlhosts file exist and have the correct permissions. Check whether the environment variables INFORMIXDIR, ONCONFIG, and SQLHOSTS have valid values and that their lengths do not exceed 255 characters. Check whether the environment variable INFORMIXDIR specifies an absolute path and does not contain spaces, tabs, new lines, or other incorrect characters. Check whether role separation-related subdirectories under the $INFORMIXDIR directory, such as aaodir and dbssodir, have the correct ownership. Run the onsecurity utility to diagnose and fix any issues.
170 | The database server failed to initialize the dataskip structure. | Free some physical memory on the system, and start the database server again.
250 | The database server failed to dynamically load the ELF library. | The ELF library is not available to the database server. Install the libelf packages.
255 | There was an internal error during server initialization. Look at any error messages written to stderr or to the online message log. | Take the appropriate action based on the error message.

Rolling back SQL transactions to a savepoint

Transactions on Informix Server are atomic, which means that either all of the operations within a transaction are committed or none are committed. For example, if a transaction encounters a constraint violation or a space-full error, the application must cancel the entire transaction and then either retry it or return an error.

The savepoint feature enables applications to roll back a transaction to a predetermined marker. Savepoints are named markers within a database transaction. In the case of an error, the transaction logic can specify that the transaction roll back to a savepoint. The Informix implementation of savepoints follows the SQL-99 standard.

The following SQL statements are implemented for savepoints:

  • The SAVEPOINT statement creates a savepoint within the current SQL transaction.
  • The ROLLBACK WORK TO SAVEPOINT statement discards changes made to the schema of the database or to its data values by statements that follow the savepoint; the effects of DDL and DML statements that preceded the savepoint persist.
  • The RELEASE SAVEPOINT statement destroys a specified savepoint, as well as any other savepoints that exist between the RELEASE statement and the savepoint that it references.
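
Because Informix follows the SQL-99 savepoint standard, the statement flow can be demonstrated with any SQL-99-compliant engine. The sketch below uses Python's built-in SQLite, whose rollback syntax omits the WORK keyword; the table and savepoint names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage the transaction explicitly
cur = conn.cursor()

cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")
cur.execute("BEGIN")
cur.execute("INSERT INTO orders VALUES (1, 10)")   # precedes the savepoint
cur.execute("SAVEPOINT before_risky")               # create the marker
cur.execute("INSERT INTO orders VALUES (2, 20)")   # follows the savepoint
cur.execute("ROLLBACK TO SAVEPOINT before_risky")  # discards row 2 only
cur.execute("RELEASE SAVEPOINT before_risky")      # destroys the marker
cur.execute("COMMIT")

rows = cur.execute("SELECT id FROM orders").fetchall()
print(rows)  # only the row inserted before the savepoint survives
```

The insert made before the savepoint persists, while the insert made after it is discarded by the rollback, without canceling the whole transaction.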


This tutorial focuses on IBM Informix database backup and restore concepts. It tells you what you need to know about backup and restore before taking the Informix system administration certification exam. You should now have a better understanding of Informix backup and restore operations, including the following concepts and features:

  • How to establish a recovery strategy and backup schedule (including whole-system backups, storage backups, serial backups, parallel backups, logical log backups, cold restores, warm restores, mixed restores, physical restores, logical log restores, and point-in-time restores)
  • Informix backup and restore utilities and their capabilities, including ON-Bar, ontape, archecker, dbexport, and dbimport
  • The configuration parameters and files for the backup and restore utilities and storage devices options and requirements
  • Ways to optimize and monitor performance for ON-Bar backups and restores, in addition to the debugging logs needed to diagnose an archiving problem
  • The commands needed to perform backup, restore, and archive verifications
  • The Informix startup and fast recovery process

