Informix Dynamic Server 11.50 Fundamentals Exam 555 certification preparation, Part 9: Informix replication technologies

Data replication and high availability

This is the last tutorial in a series of nine tutorials to help you prepare for the IBM® Informix® Dynamic Server (IDS) 11.50 Fundamentals certification exam 555. This tutorial discusses replication technologies and provides an overview of high availability technologies available in IDS. Learn the difference between High Availability Data Replication and Enterprise Replication, and follow the steps to set up an IDS server for replication and high availability.


Rashmi Chawak (chawak@us.ibm.com), Advanced Support Engineer, IBM

Rashmi Chawak is an Advanced Support Engineer for IDS at IBM. She has been in this position supporting IDS for over seven years and has worked on porting the IDS product to various platforms. Her current area of focus is replication strategies.



David B. Kolbinger (dkolbin@us.ibm.com), Advanced Support Engineer, IBM

David Kolbinger is an Advanced Support Engineer for IDS at IBM. He has worked with IBM Informix Dynamic Server since 1996. His current area of focus is replication strategies.



17 September 2009

Before you start

This tutorial gives an overview of IDS replication technologies.

About this series

This complimentary series of nine tutorials has been developed to help you prepare for the IBM Informix Dynamic Server 11.50 Fundamentals certification exam (555). This certification exam will test your knowledge of entry-level administration of IDS 11.50, including basic SQL (Structured Query Language), how to install IDS 11.50, how to create databases and database objects, security, transaction isolation, backup and recovery procedures, and data replication technologies and purposes. These tutorials provide a solid base for each section of the exam. However, you should not rely on these tutorials as your only preparation for the exam.

About this tutorial

This tutorial discusses the different replication and high availability technologies offered by IDS 11.50. It explains how to configure High Availability Data Replication (HDR), Enterprise Replication (ER), Remote Standalone secondary (RSS) servers, Shared Disk secondary (SDS) servers, and continuous log restore.

Objectives

This tutorial is designed to help you become familiar with:

  • Different replication technologies offered by IDS
  • The difference between the various replication technologies
  • Different replication terminologies
  • How to set up HDR, ER, RSS, SDS, and continuous log restore

Prerequisites

This tutorial is written for IDS database professionals whose skills and experience are at a beginning to intermediate level. You should be comfortable setting up the sqlhosts file, setting configuration parameters, and starting IDS. You should be familiar with IDS utilities such as onmode, onstat, ontape, and ON-Bar.

System requirements

To run the examples in this tutorial, you need a UNIX computer with IDS installed and the ability to start two IDS instances.


High Availability Data Replication: Introduction

In today's 24x7 online business and enterprise world, high availability of data has become a necessity. A company could lose thousands, if not millions, of dollars when a server is offline. Informix Dynamic Server has a full suite of technologies that provide uninterrupted, continuous service and minimize downtime for both failures and maintenance.

Businesses and enterprises can use replication for:

  • Capacity relief: You can propagate OLTP data to a secondary site, so that users can be directed to the secondary site for reporting purposes. This can provide more capacity to the OLTP-related users on the primary site.
  • High availability: Data is updated at the primary site and replicated to secondary site. In case of failure, the secondary site becomes the primary site.
  • Data consolidation: You can consolidate remote data into a central server. For example, you could consolidate data from branch locations.
  • Distributed availability: You can distribute information from a central server. For example, you could distribute data from headquarters to branch locations.
  • Update anywhere: This gives you a consistent set of data that can be updated at any site in a peer-to-peer fashion.

What is HDR?

HDR provides a mechanism to create and maintain a copy of a logged database on one server (the primary server) on a second server (the secondary server). In an HDR setup, there is a single primary and a single secondary server. If the primary server fails, an application can connect to the secondary server and continue operations. The goal of HDR is to minimize or eliminate the impact of a physical server failure. The secondary in an HDR pair is available in read-only mode. Reporting applications and tools can be run against this read-only secondary server, which can help reduce the load on the primary server.

How HDR works

When Data Manipulation Language (DML) statements are executed against the logged database on the primary, logical log records are sent to the secondary server. The secondary server applies the logical log records to keep its databases up to date with the updates on the primary server. Updates to the secondary can be synchronous or asynchronous. A checkpoint between the primary and secondary database servers is synchronous; that is, a checkpoint on the primary database server completes only after it completes on the secondary database server.

Prerequisites for HDR

The following prerequisites must be met for HDR:

  • Identical OS and hardware for both the primary and the secondary servers. It is not possible to set up HDR between two different operating systems.
  • Disk layout must be the same for chunks added to each server. Devices hosting database chunks must be created and available on the secondary with the same value for the PATH as the primary server. This can be achieved using symbolic links.
  • The version of IDS on both the HDR primary and secondary servers must be identical.
  • The database must be logged.
  • If blob data types are used, they must be stored in dbspaces. Blob data types stored in blobspaces are not replicated.
  • If the root chunk is mirrored on the primary server, it must also be mirrored on the secondary server.
  • HDR works with a TCP/IP connection. The database server name (the value of the DBSERVERNAME configuration parameter) must be set to a TCP/IP connection in the sqlhosts file.
  • Both the primary and secondary server machines must be trusted. Modify the ~/.rhosts file of user informix or /etc/hosts.equiv to establish trusted communications. An example follows this list.
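For illustration, here is a minimal sketch of that setup; the host names (host_a, host_b), server names, and service names are placeholders, not values used elsewhere in this tutorial:

    # /etc/hosts.equiv (or the ~/.rhosts file of user informix) on each machine
    host_a
    host_b

    # $INFORMIXSQLHOSTS on both machines: dbservername nettype hostname servicename
    prim_server    onsoctcp    host_a    prim_service
    sec_server     onsoctcp    host_b    sec_service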

Configuration parameters that affect HDR

The following configuration parameters affect HDR and its performance:

  • DRAUTO: The DRAUTO configuration parameter determines what action the secondary server will take in case the primary server fails. This parameter must be set identically on both the primary and secondary servers. Use this parameter cautiously. If there is a temporary network failure, each server could perceive the other server to be down. In this situation, if DRAUTO is set to 1, the secondary server would convert to a standard server, and the primary server might stop replication. Clients might try to update data on both the servers independently. This can cause the servers to get out of sync. Depending on the DRAUTO setting, the secondary will perform one of the following actions:
    • If DRAUTO is set to 0, the secondary will remain read only until it is manually switched to a primary or standard mode.
    • If DRAUTO is set to 1 (RETAIN_TYPE), the secondary server automatically switches to a standard server when the primary server fails. The server switches back to a secondary server when the HDR pair is restarted.
    • If DRAUTO is set to 2 (REVERSE_TYPE), the secondary server automatically switches to a primary server when the primary server fails. When the HDR pair is restarted, the roles remain reversed: this server stays the primary, and the original primary server becomes the secondary.
  • DRINTERVAL: DRINTERVAL specifies the maximum number of seconds between HDR data buffer flushes. This parameter must be set identically on both the primary and secondary servers.

    HDR has two main modes of operation: synchronous and asynchronous. Let's take a look at how updates are propagated from the primary server to the secondary server.

    When the primary server starts to flush the contents of the logical-log buffer in shared memory to the logical log on disk, it also copies the contents of the logical-log buffer to a data-replication buffer. The data replication buffers are part of the virtual shared memory that the primary server manages, and they are the same size as the logical-log buffers. The primary server then sends the contents of the data replication buffer to the HDR secondary server, either synchronously or asynchronously. The value of the DRINTERVAL configuration parameter determines whether the database server uses synchronous or asynchronous updating.

    • If DRINTERVAL is set to -1, updates are synchronous.
    • If DRINTERVAL is set to anything other than -1, updates are asynchronous.

    HDR synchronous updating: Again, when DRINTERVAL is set to -1, data replication to the HDR secondary server occurs synchronously. As soon as the primary server writes the logical-log buffer contents to the HDR buffer, it sends those records from the buffer to the HDR secondary server. The logical-log buffer flush on the primary server completes only after the primary server receives acknowledgment from the HDR secondary server that the records were received.

    HDR asynchronous updating: Again, when DRINTERVAL is set to any value other than -1, data replication occurs asynchronously to the HDR secondary server. The primary server flushes the logical-log buffer after it copies the logical-log buffer contents to the HDR buffer. Independent of that action, the primary server sends the contents of the HDR buffer across the network when one of the following conditions occurs: the HDR buffer becomes full, or the time interval specified by DRINTERVAL on the primary server has elapsed since the last time that the HDR replication buffers have been flushed.

  • DRTIMEOUT: DRTIMEOUT specifies the interval (in seconds) for which a server in the HDR pair waits for a transfer acknowledgment from the other server. If the checkpoint does not complete within the time that the DRTIMEOUT configuration parameter specifies, the primary server assumes that a failure has occurred. This parameter must be set identically on both the primary and secondary servers.
  • DRLOSTFOUND: The DRLOSTFOUND configuration parameter specifies the path name to the dr.lostfound.timestamp file. If the primary server does not receive acknowledgment from the secondary server within the time specified by the DRTIMEOUT configuration parameter, it adds the transaction information to a file named by the DRLOSTFOUND configuration parameter.
  • ENCRYPT_HDR: ENCRYPT_HDR specifies whether HDR encryption is enabled or disabled.
    • 1 = enabled; provides a secure method of transferring data from one server to the other
    • 0 = disabled

    With added security comes added cost. Extra CPU cycles are used to encrypt and decrypt HDR data.

  • DRIDXAUTO: DRIDXAUTO specifies whether the HDR primary server automatically starts index replication when the secondary server detects index corruption.
    • 1 = on; automatically replicate the index
    • 0 = off; manually replicate the index
  • LOG_INDEX_BUILDS: LOG_INDEX_BUILDS specifies whether index page logging is enabled or disabled.
    • 1 = Index page logging is enabled. Index pages are copied to the logical log, and the primary server sends the index to the secondary server through the logs.
    • 0 = Index page logging is disabled. When an index is created on the primary server, that index is transferred page by page to the secondary server.
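As an illustration, the HDR-related portion of an onconfig file for a synchronous pair might look like the following; the values are examples for discussion, not recommendations:

    DRAUTO           0    # secondary stays read-only if the primary fails
    DRINTERVAL       -1   # -1 = synchronous updating
    DRTIMEOUT        30   # seconds to wait for acknowledgment
    DRLOSTFOUND      /opt/informix/etc/dr.lostfound   # example path for lost transactions
    ENCRYPT_HDR      0    # HDR encryption disabled
    LOG_INDEX_BUILDS 1    # index page logging enabled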

How to set up and manage HDR

Starting HDR for the first time

Before proceeding with the steps to start HDR, enable trusted communication between the primary and secondary server machines for user informix. Update the $INFORMIXSQLHOSTS and /etc/services files for network connections. Make sure that the onconfig files on both the primary and secondary servers are configured correctly. The following configuration parameters must be set to the same values on both servers.

  • ROOTNAME
  • ROOTOFFSET
  • ROOTPATH
  • ROOTSIZE
  • MIRROROFFSET - if mirroring is in use
  • MIRRORPATH - if mirroring is in use
  • PHYSDBS
  • PHYSFILE
  • LOGFILES
  • LOGSIZE
  • DYNAMIC_LOGS
  • DRAUTO
  • DRINTERVAL
  • DRTIMEOUT
  • DRLOSTFOUND
  • LOG_INDEX_BUILDS: optional
Table 1. Steps to set up HDR for the first time

Step 1. On the primary: Install and register UDRs, UDTs, and DataBlade modules. On the secondary: Install UDRs, UDTs, and DataBlade modules.
Step 2. On the primary: ontape -s -L 0, onbar -b -L 0, or perform an external backup.
Step 3. On the primary: onmode -d primary sec_name
Step 4. On the secondary: ontape -p, ontape -p -e, onbar -r -p, or onbar -r -p -e
Step 5. On the secondary: onmode -d secondary prim_name
Step 6. On the secondary: ontape -l or onbar -r -l

The following steps provide detailed descriptions of the steps outlined in Table 1 above:

  1. Install user-defined types, user-defined routines, and DataBlade modules on both servers. Register them on the primary server only.
  2. Take a level 0 backup of the primary server.
  3. Run the following command to set the server as the primary server:
    onmode -d primary sec_name

    In the above command, replace sec_name with the database server name (the value of the DBSERVERNAME configuration parameter) of the secondary server. After executing the command, the following messages will be printed in the online.log:

    DR: new type = primary server name = sec_name
    DR: Trying to connect to secondary server
    DR: Cannot connect to secondary server
  4. On the secondary server, perform a physical restore from the level-0 backup created in Step 2 using the same utility that was used for taking the backup. Do not perform a logical restore.
    • ON-Bar: Use the onbar -r -p command to perform a physical restore.
    • ON-Bar and performing an external restore: Use the onbar -r -p -e command to perform the physical restore.
    • ontape: Use the ontape -p command. You cannot use the ontape -r option because it performs both a physical and a logical restore.
    • ontape and performing an external restore: Use the ontape -p -e command to perform the physical restore.
  5. On the secondary server, run the following command to set the server as a secondary server:
    onmode -d secondary prim_name

    In the above command, replace prim_name with the database server name (the value of the DBSERVERNAME configuration parameter) of the primary server. Logical recovery runs on the secondary server if all the logical logs are still available on the primary server's disk; otherwise, Step 6 is needed. After executing the command, the following message will be printed in the online.log of the secondary server:

    DR: new type = secondary server name = prim_name
  6. If logical-log records that were written to the primary server are no longer on the primary server disk, the secondary server prompts you to recover these files from tape backups. After you recover all the logical-log files from tape, the logical restore completes using the logical-log files remaining on the primary server disk.

    When HDR setup is completed successfully, the following messages will be printed in the online.log of the primary server:

    DR: Primary server connected
    DR: Primary server operational

    The following message will be printed in the online.log of the secondary server:

    Secondary server operational
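You can confirm the state of the pair at any time with onstat; the exact output varies by version:

    onstat -g dri    # shows the data-replication state and the paired server on either node
    onstat -         # the banner line shows the server mode, for example (Prim) or (Sec)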

Changing server types

You can change the type of the secondary server to either the primary server or a standard server. You can only change from a secondary server to a standard server (using the onmode -d standard command) if HDR is turned off on the secondary server. HDR is turned off when the replication connection to the primary server drops or if replication fails on the secondary server. When you restart a standard server, it does not attempt to connect to the other server in the replication pair.

With HDR, changing the mode of one server can cause changes to the other server in the pair. This section discusses what happens when one of the servers in the HDR pair is brought down. On the primary, running the onmode -k command has the following effects:

  • The secondary prints a message in the message log:
    DR: Receive error. HDR is turned off.
  • On the secondary, the effect depends upon the setting of the DRAUTO configuration parameter:
    • If DRAUTO = 0, the secondary stays in read-only mode.
    • If DRAUTO = 1, the secondary server switches to standard mode and is available to accept updates.
    • If DRAUTO = 2, the secondary server switches to primary mode as soon as the connection to the old primary is lost.

On the secondary, running the onmode -k command will cause the primary server to print the following in the message log:

DR: Turned off on primary server
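For example, if the primary fails while DRAUTO is set to 0, you fail over manually on the secondary with one of the commands this section has already introduced:

    onmode -d standard             # make the secondary a standalone standard server
    onmode -d primary prim_name    # or make it the primary; prim_name is the other server in the pair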

Enterprise Replication: Introduction

Enterprise Replication (ER) is an asynchronous log-based replication method. Transactions for replication are captured from the logical logs on the source instances after they are committed and sent to the target instance, where they are applied as normal logged transactions. This type of replication has very minimal impact on the source instance. Since the information is read from the logical logs, it does not impact transaction processing. Because replication is asynchronous, the source instance is not waiting for transactions to be applied on the targets before continuing processing.

Enterprise Replication's flexible architecture supports multiple replication methodologies as well as network topologies:

  • Replication methodologies:
    • Primary-target - Database changes originate at the primary and are replicated to target instances, but changes on target instances are not replicated to the primary instance
    • Update-anywhere - Database changes are applied to all instances participating in replication, regardless of their originating server
  • Network topologies:
    • Fully connected - Continuous connectivity between all participating database servers
    • Hierarchical tree - A parent-child configuration that supports continuous and intermittent connectivity
    • Forest of trees - Multiple hierarchical trees that connect at the root database servers

Adding to its flexibility, Enterprise Replication can be used in conjunction with HDR, SDS, and RSS replication methods. It can also be used across platforms and different versions of IBM Informix Dynamic Server. Enterprise Replication does not require identical storage to be defined between the instances or even identical table schemas or names.

How Enterprise Replication works

The three phases of Enterprise Replication are listed below, followed by an example with a more detailed description of the phases:

  • Data capture
  • Data transport
  • Applying replicated data

Let's take a look at a simple example of replicating a single transaction from a source to a target:

  1. A client application performs a transaction on a table that is defined as part of a replicate.
  2. The transaction is put into the logical log.
  3. The log capture component, also known as the snoopy component, reads the logical log and passes the log records onto the grouper component.
  4. The grouper component evaluates the log records for replication and groups them into a message that describes the operations that were in the original transaction.
  5. The grouper component places the message in the send queue. Under certain situations, the send queue spools messages to disk for temporary storage.
  6. The send queue transports the replication message across the Enterprise Replication network to the target server.
  7. The replication message is placed in the receive queue at the target server.
  8. The data sync component applies the transaction in the target database. If necessary, the data sync component performs conflict resolution.
  9. An acknowledgment that the message was successfully applied is placed in the acknowledgment queue.
  10. The acknowledgment message is sent back to the source server.

Starting Enterprise Replication for the first time

Configuration parameters that affect Enterprise Replication:

  • CDR_EVALTHREADS - The number of evaluator threads per CPU VP and the number of additional threads, separated by a comma (required)
  • CDR_DSLOCKWAIT - The number of seconds the datasync component waits for database locks (required)
  • CDR_QUEUEMEM - The maximum amount of memory, in KB, for the send and receive queues (required)
  • CDR_NIFCOMPRESS - Controls the network interface compression level (required)
  • CDR_SERIAL - Specifies the incremental size and the starting value of replicated serial columns (required)
  • CDR_DBSPACE - The dbspace name for the syscdr database (optional)
  • CDR_QHDR_DBSPACE - The name of the transaction record dbspace; default is the root dbspace (optional)
  • CDR_QDATA_SBSPACE - The names of sbspaces for spooled transaction data, separated by commas (required)
  • CDR_MAX_DYNAMIC_LOGS - The maximum number of dynamic log requests that ER can make within one server session (required)
  • CDR_SUPPRESS_ATSRISWARN - The datasync error and warning code numbers to be suppressed in ATS and RIS files (optional)
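As a sketch, an onconfig fragment for the example that follows might set these parameters as shown; the values other than CDR_QDATA_SBSPACE are common defaults, listed only for illustration:

    CDR_EVALTHREADS      1,2
    CDR_DSLOCKWAIT       5
    CDR_QUEUEMEM         4096
    CDR_NIFCOMPRESS      0
    CDR_SERIAL           0
    CDR_QDATA_SBSPACE    qdatasbspace
    CDR_MAX_DYNAMIC_LOGS 0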

Let's take a look at an example of defining replication between two instances with the following characteristics. This example shows update anywhere replication. In update anywhere replication, changes made on any database server are replicated to all the other participating database servers.

  • CDR_QDATA_SBSPACE named qdatasbspace in a cooked file named /u/data/qdatasbspace with an offset of 0 and a size of 200MB
  • Source group name grp_er1, target group name grp_er2
  • Source DBSERVERNAME er1, target DBSERVERNAME er2
  • Source database name primary_db, target database name target_db
  • Source table name primary_table, target table name target_table
  • Source ATS directory name /u/data/atsdir, target ATS directory name /u/data/atsdir
  • Source RIS directory name /u/data/risdir, target RIS directory name /u/data/risdir
  • Replicate name repl1

This example uses a conflict resolution of "timestamp". The available conflict resolution methods are:

  • always - Enterprise Replication does not resolve conflicts, but replicated changes are applied even if the operations are not the same on the source and target servers. Use only in primary-target replication.
  • ignore - Enterprise Replication does not resolve conflicts.
  • timestamp - The row or transaction with the most recent time stamp takes precedence in a conflict.
  • deletewins - The row or transaction with a DELETE operation or, otherwise, with the most recent time stamp, takes precedence in a conflict. The deletewins conflict resolution rule prevents upserts.
Table 2. Steps to set up ER for the first time

Step 1. On the source: Use onspaces to create the sbspace for spooled transaction data, specified by CDR_QDATA_SBSPACE:
  • onspaces -c -S qdatasbspace -p /u/data/qdatasbspace -o 0 -s 200000
On the target: Use onspaces to create the same sbspace for CDR_QDATA_SBSPACE:
  • onspaces -c -S qdatasbspace -p /u/data/qdatasbspace -o 0 -s 200000

Step 2. On both the source and the target: Modify the onconfig file to set CDR_QDATA_SBSPACE:
  • CDR_QDATA_SBSPACE qdatasbspace

Step 3. On both the source and the target: Configure the sqlhosts file to contain connections for both the source and the target:
  • grp_er1 group - - i=12
  • er1 onsoctcp primary 9211 g=grp_er1
  • grp_er2 group - - i=13
  • er2 onsoctcp stewie 9212 g=grp_er2

Step 4. On the source: Define both the source and target servers for replication:
  • cdr define server -c grp_er1 -A /u/data/atsdir -R /u/data/risdir -I grp_er1
  • cdr define server -c grp_er2 -A /u/data/atsdir -R /u/data/risdir -I -S grp_er1 grp_er2

Step 5. On the source: Define a replicate:
  • cdr define replicate -C timestamp -S tran -A -R repl1 \
    "primary_db@er1.primary_table" "select * from primary_table" \
    "target_db@er2.target_table" "select * from target_table"
  Note: This replicates all rows from primary_table to target_table. Any valid select statement could be used here to define the data being replicated.

Step 6. On the source: Start the replicate:
  • cdr start replicate repl1

This is a very simple example of setting up replication. Read the documentation for further details on command line syntax and options.
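Once the replicate is started, the cdr utility can confirm that replication is active; for example:

    cdr list server       # lists the defined servers and their connection state
    cdr list replicate    # lists the defined replicates; repl1 should show as active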


Shared Disk secondary: Introduction

In Shared Disk (SD) secondary replication, the primary and SD secondary servers share disk space through a high-availability cluster configuration. In this configuration, no physical copy of the database is stored on the SD secondary server. If the primary and SD secondary servers exist on the same machine, they can both access the local disk. If they reside on different physical machines, they must be configured to use shared disk devices. Do not configure the primary and SD secondary servers to use operating system buffering such as an NFS mount. Because the primary and SD secondary servers share disk space, starting an SD secondary is very quick, but an SD secondary server cannot be converted to a standard server outside of the replication environment or to an RS secondary server. An SD secondary server can be used in conjunction with Enterprise Replication, HDR, and RS secondary servers.

When to use an SD secondary server

  • Increased capacity: Multiple SD secondary servers can offload reporting capacity without impacting the primary instance.
  • Primary server failure backup: In the event of a primary server failure, the SD secondary server can quickly be promoted to the primary server.

In the event of a disk failure, an SD secondary server will not be available as a hot backup. If a hot backup is required, an HDR secondary server or an RS secondary server is recommended.

How SD secondary replication works

Since the primary and SD secondary servers share disk space, no logs need to be passed between the servers. To keep the instances in sync, only the log position is sent.

Starting SD secondary replication for the first time

Configuration parameters that affect SD secondary replication:

  • SDS_ENABLE - Enable or disable an SDS server (required)
  • SDS_TEMPDBS - The temporary dbspace used by an SDS server
  • SDS_PAGING - The paths of two buffer paging files
  • SDS_TIMEOUT - The time, in seconds, that the primary waits for an acknowledgment from an SDS server while performing page flushing before marking the SDS server as down
  • UPDATABLE_SECONDARY - Controls whether secondary servers can accept update, insert, and delete operations
  • TEMPTAB_NOLOG - Controls the default logging mode for temporary tables (required to be set to 1 on SD secondary servers)
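A sketch of these settings in an SD secondary server's onconfig file follows; the paging-file and temporary-dbspace paths are placeholders, and the SDS_TEMPDBS value lists the dbspace name, path, page size, offset, and size:

    SDS_ENABLE    1
    SDS_PAGING    /u/data/sdspage1,/u/data/sdspage2
    SDS_TEMPDBS   sdstmpdbs1,/u/data/sdstmpdbs1,2,0,16000
    TEMPTAB_NOLOG 1
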
Table 3. Steps to set up SDS for the first time

Step 1. On the primary: Set the SDS_TIMEOUT configuration parameter in the onconfig file.

Step 2. On the primary: Configure the alias name of the SD primary server:
  • onmode -d set SDS primary alias

Step 3. On the secondary: Set the configuration parameters:
  • SDS_ENABLE
  • SDS_PAGING
  • SDS_TEMPDBS

Step 4. On the secondary: Set the following configuration parameters to match those on the primary:
  • ROOTNAME
  • ROOTPATH
  • ROOTOFFSET
  • ROOTSIZE
  • PHYSFILE
  • LOGFILES
  • LOGSIZE

Step 5. On the secondary: Add an entry to the sqlhosts file for the primary server:
  • dbservername nettype hostname servicename

Step 6. On the secondary: Start the SD secondary server:
  • oninit

Check the documentation for further details on command line syntax and options.
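You can verify the configuration from the primary with onstat; for example:

    onstat -g sds    # on the primary, lists the connected SD secondary servers and log-position information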

Promoting an SD secondary server to primary server

In the event of a primary server failure, promote an SD secondary server to the primary server by issuing one of the following commands:

onmode -d set SDS primary alias
onmode -d make primary

Remote Standalone secondary servers: Introduction

An RS (Remote Standalone) secondary server is very similar to an HDR secondary server. It can be used in a high availability cluster for disaster recovery, contains a complete copy of the database, receives logs in much the same way as HDR, and requires identical hardware and data layout between the primary and secondary servers. With an RS secondary server, the limitation of having a single secondary server has been lifted, providing increased availability.

While an HDR environment and an RS secondary environment are similar, there are two main distinctions:

  • HDR supports both synchronous and asynchronous mode, while an RS secondary supports only asynchronous replication.
  • HDR uses synchronized checkpoints, while an RS secondary server does not.

When to use an RS secondary server

  • Increased server availability: Having multiple RS secondary servers provides greater availability.
  • Geographically distant backup support: By spreading the replicating nodes across multiple geographical locations, the possibility of a single disaster causing an outage is decreased.
  • Improved reporting performance: Multiple RS secondary servers can lessen the impact of reporting on the primary server by offloading reporting to the secondary servers.
  • Availability over unstable networks: In an environment with a slow or unstable network, the RS secondary server's utilization of asynchronous replication means that no delay will be seen on the primary server. Neither transaction commits nor checkpoints are synchronized between the primary and RS secondary servers.

How RS secondary replication works

When Data Manipulation Language (DML) statements are executed against the logged database on the primary server, logical log records are sent to the secondary server. The secondary server applies the logical log records. Updates to the RS secondary server are always asynchronous.

Starting RS secondary replication for the first time

Configuration parameters that affect RS secondary replication:

  • HA_ALIAS - The server alias for a high-availability cluster
  • LOG_INDEX_BUILDS - Enable or disable index page logging (required)
  • UPDATABLE_SECONDARY - Controls whether secondary servers can accept update, insert, and delete operations.
  • FAILOVER_CALLBACK - Specifies the path and program name called when a secondary server transitions to a standard or primary server.
  • TEMPTAB_NOLOG - Controls the default logging mode for temporary tables (required to be set to 1 on RS secondary servers)
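For illustration, the corresponding onconfig entries might look like this; rss_servername is a placeholder, as in the steps below:

    LOG_INDEX_BUILDS 1                 # on the primary: index page logging is required
    HA_ALIAS         rss_servername    # on the RS secondary: its alias within the cluster
    TEMPTAB_NOLOG    1                 # on the RS secondary: required temp-table logging mode
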
Table 4. Steps to set up RS for the first time

Step 1. On the primary: Install and register UDRs, UDTs, and DataBlade modules. On the secondary: Install UDRs, UDTs, and DataBlade modules.

Step 2. On the primary: Enable index page logging:
  • onmode -wf LOG_INDEX_BUILDS=1

Step 3. On the primary: Add the RS secondary server:
  • onmode -d add RSS rss_servername password

Step 4. On the primary: Take a level-0 backup with ontape or ON-Bar:
  • ontape -s -L 0
  • onbar -b -L 0

Step 5. On the secondary: Perform a physical restore with ontape or ON-Bar:
  • ontape -p or ontape -p -e
  • onbar -r -p or onbar -r -p -e

Step 6. On the secondary: Perform a logical restore with ontape or ON-Bar (needed when logical logs written on the primary since the level-0 backup are no longer on the primary server disk):
  • ontape -l
  • onbar -r -l

Check the documentation for further details on command line syntax and options.
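You can check the RS secondary connections from the primary with onstat; for example:

    onstat -g rss    # on the primary, lists the connected RS secondary servers and their status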

Promoting an RS secondary node to primary

In the event of a primary server failure, promote the RS secondary server to the primary server by issuing the following command:

onmode -d make primary alias

Continuous log restore: Introduction

Continuous log restore can be used to establish a second system (a hot backup) that is available to replace the source/primary system if the source system fails. This feature lets you perform a continuous restore of logical log backups using the ontape and ON-Bar utilities. Logical logs backed up on the source system can be restored on the target system as they become available.

A normal log restore restores all of the available log file backups and applies the log records. Transactions that are still open are rolled back in the transaction cleanup phase, and then the server is brought into quiescent mode. While the server is in quiescent mode, no additional logical logs can be restored.

With continuous log restore, the server is put into log restore suspended state after the last available log is restored. The restore client (ontape or ON-Bar) exits and returns control to you. With the server in this state, another logical restore can be started as additional logical logs become available. This cycle can continue indefinitely.

If the source system fails, the remaining available logical logs can be restored on the target system, which can then be brought online and function as the new source system. The version of IDS must be identical on both the source and target systems.

Continuous log restore is designed to assist disaster recovery. It is not designed as a high-availability solution. However, a continuous log restore server can be promoted into an HDR secondary server.

How to set up continuous log restore

Table 5. Steps to set up continuous log restore using ontape

Step 1. On the source: ontape -s -L 0
Step 2. On the target/standby: ontape -p
Step 3. On the source: ontape -a
Step 4. On the target/standby: ontape -l -C
Step 5. On the target/standby: ontape -l -X

The following steps provide detailed descriptions of the steps outlined in Table 5 above:

  1. Take a level 0 backup of the source server.
  2. On the target server, perform a physical restore from the level-0 backup created in Step 1 using the same utility that was used for the backup. Do not perform a logical restore.
  3. On the source server, back up the logical logs using the following command:
    ontape -a
  4. Copy or mount the logical log backup files from the source server to the target server, and rename the files as needed (or use the environment variable IFX_ONTAPE_FILE_PREFIX to set the filename).

    On the target server, perform the logical log restore using the following command. The target server stays in recovery mode.

    ontape -l -C
  5. After all the needed logical logs are restored, run the following command to complete the logical restore and bring the server into quiescent mode:
    ontape -l -X

    Note: Continuous log restore stops on the target server after the server is changed into quiescent or online mode. You must start a new restore to enable continuous log restore again.
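In practice, Steps 3 and 4 are repeated each time new logical logs are backed up on the source. A minimal sketch of the recurring cycle, assuming the log backups are written to files and that the host name and paths are placeholders:

    # On the source: back up newly filled logical logs
    ontape -a

    # On the target: fetch the new log backup files, then apply them
    scp source_host:/backups/logs/* /backups/logs/
    ontape -l -C    # the server remains in recovery mode, awaiting more logs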


Conclusion

After reading this tutorial, you should have a better understanding of the following:

  • The various replication offerings available with IDS 11.50
  • Replication terminology
  • Configuring replication environments
