Before you start
This tutorial gives an overview of IDS replication technologies.
About this series
This complimentary series of nine tutorials has been developed to help you prepare for the IBM Informix Dynamic Server 11.50 Fundamentals certification exam (555). This certification exam will test your knowledge of entry-level administration of IDS 11.50, including basic SQL (Structured Query Language), how to install IDS 11.50, how to create databases and database objects, security, transaction isolation, backup and recovery procedures, and data replication technologies and purposes. These tutorials provide a solid base for each section of the exam. However, you should not rely on these tutorials as your only preparation for the exam.
About this tutorial
This tutorial discusses the different replication and high availability technologies offered by IDS 11.50. It explains how to configure High Availability Data Replication (HDR), Enterprise Replication (ER), Remote Standalone secondary (RSS) servers, Shared Disk secondary (SDS) servers, and continuous log restore.
This tutorial is designed to help you become familiar with:
- Different replication technologies offered by IDS
- The difference between the various replication technologies
- Different replication terminologies
- How to set up HDR, ER, RSS, SDS, and continuous log restore
This tutorial is written for IDS database professionals whose skills and experience are at a beginning to intermediate level. You should be comfortable in setting up the sqlhosts file, setting configuration parameters, and starting IDS. You should be familiar with different IDS utilities like onmode, onstat, ontape, ON-bar, and so on.
To run the examples in this tutorial, you need a UNIX computer with IDS installed and to be able to start two IDS instances.
High Availability Data Replication: Introduction
In today's 24x7 online business and enterprise world, high availability of data has become a necessity. A company can lose thousands, if not millions, of dollars when a server is offline. Informix Dynamic Server offers a full suite of technologies that provide uninterrupted, continuous service and minimize downtime and maintenance.
Businesses and enterprises can use replication for:
- Capacity relief: You can propagate OLTP data to a secondary site, so that users can be directed to the secondary site for reporting purposes. This can provide more capacity to the OLTP-related users on the primary site.
- High availability: Data is updated at the primary site and replicated to secondary site. In case of failure, the secondary site becomes the primary site.
- Data consolidation: You can consolidate remote data into a central server. For example, you could consolidate data from branch locations.
- Distributed availability: You can distribute information from a central server. For example, you could distribute data from headquarters to branch locations.
- Update anywhere: This gives you a consistent set of data that can be updated at any site in a peer-to-peer fashion.
What is HDR?
HDR provides a mechanism to create and maintain a copy of a logged database on one server (the primary server) on a second server (the secondary server). In an HDR setup, there is a single primary and a single secondary server. If the primary server fails, an application can connect to the secondary server and continue operations. The goal of HDR is to minimize or eliminate the impact of a physical server failure. The secondary in an HDR pair is available in read-only mode. Reporting applications and tools can be run against this read-only secondary server, which can help reduce the load on the primary server.
How HDR works
When Data Manipulation Language (DML) statements are executed against the logged database on the primary, logical log records are sent to the secondary server. The secondary server applies the logical log records to keep the status of the database up to date with updates on the primary server. Updates to the secondary can be synchronous or asynchronous. A checkpoint between the primary and secondary database servers is synchronous; that is, a checkpoint on the primary database server completes only after it completes on the secondary database server.
Prerequisites for HDR
The following prerequisites must be met for HDR:
- Identical OS and hardware for both the primary and the secondary servers. It is not possible to set up HDR between two different operating systems.
- Disk layout must be the same for chunks added to each server. Devices hosting database chunks must be created and available on the secondary with the same value for the PATH as the primary server. This can be achieved using symbolic links.
- The version of IDS on both the HDR primary and secondary servers must be identical.
- The database must be logged.
- If blob data types are used, they must be stored in dbspaces. Blob data types stored in blobspaces are not replicated.
- If the root chunk is mirrored on the primary server, it must also be mirrored on the secondary server.
- HDR works with a TCP/IP connection. The database server name (the value of the DBSERVERNAME configuration parameter) must be associated with a TCP/IP connection in the sqlhosts file.
- Both the primary and secondary server machines must be trusted. Modify the .rhosts or /etc/hosts.equiv for user informix to establish trusted communications.
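As an illustration, trust for user informix can be established with entries like the following; the hostnames primhost and sechost are hypothetical, and your security policy may call for /etc/hosts.equiv instead:

```shell
# On the primary machine (primhost), allow the secondary host for user informix:
echo "sechost informix" >> ~informix/.rhosts
chmod 600 ~informix/.rhosts

# On the secondary machine (sechost), allow the primary host:
echo "primhost informix" >> ~informix/.rhosts
chmod 600 ~informix/.rhosts
```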
Configuration parameters that affect HDR
The following configuration parameters affect HDR and its performance:
DRAUTO determines what action the secondary server takes if the primary server fails. This parameter must be set identically on both the primary and secondary servers. Use this parameter cautiously: if there is a temporary network failure, each server could perceive the other server to be down. In that situation, if DRAUTO is set to 1, the secondary server converts to a standard server, and the primary server might stop replication. Clients might then update data on both servers independently, which can cause the servers to get out of sync. Depending on the DRAUTO setting, the secondary performs one of the following actions:
- DRAUTO set to 0: the secondary remains read-only until it is manually switched to primary or standard mode.
- DRAUTO set to 1 (RETAIN_TYPE): the secondary server automatically switches to a standard server when the primary server fails. The server switches back to a secondary server when the HDR pair is restarted.
- DRAUTO set to 2 (REVERSE_TYPE): the secondary server automatically switches to a primary server when the primary server fails. The server remains a primary server (and the original primary server switches to a secondary server) when the HDR pair is restarted.
DRINTERVAL specifies the maximum number of seconds between HDR data buffer flushes. This parameter must be set identically on both the primary and secondary servers.
HDR has two main modes of operation: synchronous or asynchronous. Let's take a look at how updates are propagated from the primary server to the secondary server.
When the primary server starts to flush the contents of the logical-log buffer in shared memory to the logical log on disk, it also copies the contents of the logical-log buffer to a data replication buffer. The data replication buffers are part of the virtual shared memory that the primary server manages, and they are the same size as the logical-log buffers. The primary server then sends the contents of the data replication buffer to the HDR secondary server, either synchronously or asynchronously. The value of the DRINTERVAL configuration parameter determines whether the database server uses synchronous or asynchronous updating:
- If DRINTERVAL is set to -1, updates are synchronous.
- If DRINTERVAL is set to anything other than -1, updates are asynchronous.
HDR synchronous updating: When DRINTERVAL is set to -1, data replication to the HDR secondary server occurs synchronously. As soon as the primary server writes the logical-log buffer contents to the HDR buffer, it sends those records from the buffer to the HDR secondary server. The logical-log buffer flush on the primary server completes only after the primary server receives acknowledgment from the HDR secondary server that the records were received.
HDR asynchronous updating: When DRINTERVAL is set to any value other than -1, data replication to the HDR secondary server occurs asynchronously. The primary server flushes the logical-log buffer after it copies the logical-log buffer contents to the HDR buffer. Independent of that action, the primary server sends the contents of the HDR buffer across the network when one of the following conditions occurs: the HDR buffer becomes full, or the time interval specified by DRINTERVAL on the primary server has elapsed since the last time the HDR replication buffers were flushed.
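To make the two modes concrete, here is how DRINTERVAL might appear in the onconfig file (the values shown are illustrative only):

```
DRINTERVAL  -1   # synchronous: log flush waits for the secondary's acknowledgment
# DRINTERVAL 30  # asynchronous: ship the HDR buffer at least every 30 seconds
```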
DRTIMEOUT specifies the time interval (in seconds) for which the servers in the HDR pair wait for transfer acknowledgment from each other. If a checkpoint does not complete within the time that the DRTIMEOUT configuration parameter specifies, the primary server assumes that a failure has occurred. This parameter must be set identically on both the primary and secondary servers.

DRLOSTFOUND specifies the path name to the dr.lostfound.timestamp file. If the primary server does not receive acknowledgment from the secondary server within the time specified by the DRTIMEOUT configuration parameter, it adds the transaction information to the file named by the DRLOSTFOUND configuration parameter.
ENCRYPT_HDR specifies whether HDR encryption is enabled or disabled.
- 1 = enabled; provides a secure method of transferring data from one server to the other
- 0 = disabled
With added security comes added cost. Extra CPU cycles are used to encrypt and decrypt HDR data.
DRIDXAUTO specifies whether the HDR server automatically starts index replication if index corruption is detected by the secondary server.
- 1 = on; automatically replicate the index
- 0 = off; manually replicate the index
LOG_INDEX_BUILDS specifies whether index page logging is enabled or disabled.
- 1 : Index page logging is enabled. Index pages are copied to the logical log. The primary server sends the index to the secondary server through the logs.
- 0 : Index page logging is disabled. When an index is created on the primary server, that index is transferred page by page to the secondary.
How to set up and manage HDR
Starting HDR for the first time
Before proceeding with the steps to start HDR, enable trusted communication between the primary and secondary server machines for user informix. Update the $INFORMIXSQLHOSTS and /etc/services files for network connections. Make sure that the onconfig files on both the primary and secondary servers are configured correctly. The following configuration parameters must be set to the same values on both the servers.
- MIRROROFFSET - if mirroring is in use
- MIRRORPATH - if mirroring is in use
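For illustration, matching sqlhosts and /etc/services entries for an HDR pair might look like this; the server names, hostnames, service names, and port numbers are all hypothetical:

```
# $INFORMIXSQLHOSTS on both machines
# dbservername  nettype   hostname  servicename
prim_serv       onsoctcp  primhost  svc_prim
sec_serv        onsoctcp  sechost   svc_sec
```

```
# /etc/services on both machines
svc_prim  9088/tcp
svc_sec   9089/tcp
```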
Table 1. Steps to set up HDR for the first time
|Step||On the primary||On the secondary|
|1||Install UDRs, UDTs, and DataBlade modules. Register UDRs, UDTs, and DataBlade modules.||Install UDRs, UDTs, and DataBlade modules.|
The following steps provide detailed descriptions of the steps outlined in Table 1 above:
- Install user-defined types, user-defined routines, and DataBlade modules on both servers. Register them on the primary server only.
- Take a level 0 backup of the primary server.
- Run the following command to set the server as the primary server:

onmode -d primary sec_name

In the above command, replace sec_name with the database server name (the value of the DBSERVERNAME configuration parameter) of the secondary server. After executing the command, the following messages are printed in the online.log:

DR: new type = primary server name = sec_name
DR: Trying to connect to secondary server
DR: Cannot connect to secondary server
- On the secondary server, perform a physical restore from the level-0 backup created in Step 2, using the same utility that was used for taking the backup. Do not perform a logical restore.
  - ON-Bar: Use the onbar -r -p command to perform a physical restore.
  - ON-Bar with an external restore: Use the onbar -r -p -e command to perform the physical restore.
  - ontape: Use the ontape -p option. You cannot use the ontape -r option, because it performs both a physical and a logical restore.
  - ontape with an external restore: Use the ontape -p -e command to perform the physical restore.
- On the secondary server, run the following command to set the server as a secondary server:

onmode -d secondary prim_name

In the above command, replace prim_name with the database server name (the value of the DBSERVERNAME configuration parameter) of the primary server. On the secondary server, logical recovery is performed if all the logical logs are still available on disk; otherwise, Step 6 needs to be performed. After executing the command, the following message is printed in the online.log of the secondary server:

DR: new type = secondary server name = prim_name
- If logical-log records that were written to the primary server are no longer on the primary server disk, the secondary server prompts you to recover these files from tape backups. After you recover all the logical-log files on tape, the logical restore completes using the logical-log files on the primary server disk.
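Put together, the steps above reduce to a short command sequence. This sketch assumes ontape for the backup and uses the hypothetical server names prim_serv and sec_serv:

```shell
# On the primary (prim_serv):
ontape -s -L 0                  # Step 2: level-0 backup
onmode -d primary sec_serv      # Step 3: declare this server the HDR primary

# On the secondary (sec_serv):
ontape -p                       # Step 4: physical restore only (no logical restore)
onmode -d secondary prim_serv   # Step 5: declare this server the HDR secondary
```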
When HDR setup is completed successfully, the following message will be printed in the online.log of primary server:
DR: Primary server connected
DR: Primary server operational
The following message will be printed in the online.log of secondary server:
Secondary server operational
Changing server types
You can change the type of the secondary server to either the primary server or a standard server. You can only change from a secondary server to a standard server (using the onmode -d standard command) if HDR is turned off on the secondary server. HDR is turned off when the replication connection to the primary server drops or if replication fails on the secondary server. When you restart a standard server, it does not attempt to connect to the other server in the replication pair. With HDR, changing the mode of one server can cause changes to the other server in the pair. This section discusses what happens when one of the servers in the HDR pair is brought down.

On the primary, the onmode -k command has the following effects:
- The secondary prints a message in the message log: DR: Receive error. HDR is turned off.
- On the secondary, the further effect depends upon the setting of the DRAUTO configuration parameter:
  - DRAUTO = 0: the secondary stays in read-only mode.
  - DRAUTO = 1: the secondary server switches to standard mode and is available to accept updates.
  - DRAUTO = 2: the secondary server switches to primary mode as soon as the connection to the old primary is lost.

On the secondary, running the onmode -k command causes the primary server to print the following in the message log: DR: Turned off on primary server
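The type changes discussed in this section map onto onmode commands such as these, run on the server whose type is being changed (the peer server names are hypothetical):

```shell
onmode -d standard              # secondary -> standard (only when HDR is off)
onmode -d primary sec_serv      # make this server the primary of the pair
onmode -d secondary prim_serv   # make this server the secondary of the pair
```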
Enterprise Replication: Introduction
Enterprise Replication (ER) is an asynchronous log-based replication method. Transactions for replication are captured from the logical logs on the source instances after they are committed and sent to the target instance, where they are applied as normal logged transactions. This type of replication has very minimal impact on the source instance. Since the information is read from the logical logs, it does not impact transaction processing. Because replication is asynchronous, the source instance is not waiting for transactions to be applied on the targets before continuing processing.
Enterprise Replication's flexible architecture supports multiple replication methodologies as well as network topologies:
- Replication methodologies:
- Primary-target - Database changes originate at the primary and are replicated to target instances, but changes on target instances are not replicated to the primary instance
- Update-anywhere - Database changes are applied to all instances participating in replication, regardless of their originating server
- Network topologies:
- Fully connected - Continuous connectivity between all participating database servers
- Hierarchical tree - A parent-child configuration that supports continuous and intermittent connectivity
- Forest of trees - Multiple hierarchical trees that connect at the root database servers
Adding to its flexibility, Enterprise Replication can be used in conjunction with HDR, SDS, and RSS replication methods. It can also be used across platforms and different versions of IBM Informix Dynamic Server. Enterprise Replication does not require identical storage to be defined between the instances or even identical table schemas or names.
How Enterprise Replication works
The three phases of Enterprise Replication are listed below, followed by an example with a more detailed description of the phases:
- Data capture
- Data transport
- Applying replicated data
Let's take a look at a simple example of replicating a single transaction from a source to a target:
- A client application performs a transaction in a database that is defined as a replicate.
- The transaction is put into the logical log.
- The log capture component, also known as the snoopy component, reads the logical log and passes the log records onto the grouper component.
- The grouper component evaluates the log records for replication and groups them into a message that describes the operations that were in the original transaction.
- The grouper component places the message in the send queue. Under certain situations, the send queue spools messages to disk for temporary storage.
- The send queue transports the replication message across the Enterprise Replication network to the target server.
- The replication message is placed in the receive queue at the target server.
- The data sync component applies the transaction in the target database. If necessary, the data sync component performs conflict resolution.
- An acknowledgment that the message was successfully applied is placed in the acknowledgment queue.
- The acknowledgment message is sent back to the source server.
Starting Enterprise Replication for the first time
Configuration parameters that affect Enterprise Replication:
- CDR_EVALTHREADS - The number of evaluator threads per CPU VP and the number of additional threads, separated by a comma (required)
- CDR_DSLOCKWAIT - The number of seconds the datasync component waits for database locks (required)
- CDR_QUEUEMEM - The maximum amount of memory, in KB, for the send and receive queues (required)
- CDR_NIFCOMPRESS - Controls the network interface compression level (required)
- CDR_SERIAL - Specifies the incremental size and the starting value of replicated serial columns (required)
- CDR_DBSPACE - The dbspace name for the syscdr database (optional)
- CDR_QHDR_DBSPACE - The name of the transaction record dbspace; default is the root dbspace (optional)
- CDR_QDATA_SBSPACE - The names of sbspaces for spooled transaction data, separated by commas (required)
- CDR_MAX_DYNAMIC_LOGS - The maximum number of dynamic log requests that ER can make within one server session (required)
- CDR_SUPPRESS_ATSRISWARN - The datasync error and warning code numbers to be suppressed in ATS and RIS files (optional)
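For reference, a minimal set of ER parameters in the onconfig file might look like this; the values shown are common defaults, not recommendations:

```
CDR_EVALTHREADS       1,2
CDR_DSLOCKWAIT        5
CDR_QUEUEMEM          4096
CDR_NIFCOMPRESS       0
CDR_SERIAL            0
CDR_QDATA_SBSPACE     qdatasbspace
CDR_MAX_DYNAMIC_LOGS  0
```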
Let's take a look at an example of defining replication between two instances with the following characteristics. This example shows update anywhere replication. In update anywhere replication, changes made on any database server are replicated to all the other participating database servers.
- CDR_QDATA_SBSPACE named qdatasbspace in a cooked file named /u/data/qdatasbspace with an offset of 0 and a size of 200MB
- Source group name grp_er1, target group name grp_er2
- Source DBSERVERNAME er1, target DBSERVERNAME er2
- Source database name primary_db, target database name target_db
- Source table name primary_table, target table name target_table
- Source ATS directory name /u/data/atsdir, target ATS directory name /u/data/atsdir
- Source RIS directory name /u/data/risdir, target RIS directory name /u/data/risdir
- Replicate name repl1
This example uses a conflict resolution of "timestamp". The available conflict resolution methods are:
- always - Enterprise Replication does not resolve conflicts, but replicated changes are applied even if the operations are not the same on the source and target servers. Use only in primary-target replication.
- ignore - Enterprise Replication does not resolve conflicts.
- timestamp - The row or transaction with the most recent time stamp takes precedence in a conflict.
- deletewins - The row or transaction with a DELETE operation or, otherwise, with the most recent time stamp takes precedence in a conflict. The deletewins conflict resolution rule prevents upserts.
Table 2. Steps to set up ER for the first time
|Step||On the source||On the target|
|1||Use onspaces to create the sbspace for spooled transaction data, specified by CDR_QDATA_SBSPACE.||Use onspaces to create the sbspace for spooled transaction data, specified by CDR_QDATA_SBSPACE.|
|2||Modify the onconfig file to set the ER configuration parameters, including CDR_QDATA_SBSPACE.||Modify the onconfig file to set the ER configuration parameters, including CDR_QDATA_SBSPACE.|
|3||Configure the sqlhosts file to contain connections and groups for both the source and the target.||Configure the sqlhosts file to contain connections and groups for both the source and the target.|
|4||Define both the source and target servers for replication.||-|
|5||Define a replicate.||-|
|6||Start the replicate.||-|
This is a very simple example of setting up replication. Read the documentation for further details on command line syntax and options.
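Using the example names above, the setup steps might be realized with commands along these lines. This is a sketch, so verify the exact onspaces and cdr syntax and options against the documentation before use:

```shell
# Step 1: create the sbspace for spooled transaction data (on each server)
onspaces -c -S qdatasbspace -p /u/data/qdatasbspace -o 0 -s 204800

# Step 4: define the source and target servers for replication
cdr define server --ats=/u/data/atsdir --ris=/u/data/risdir --init grp_er1
cdr define server --ats=/u/data/atsdir --ris=/u/data/risdir \
    --sync=grp_er1 --init grp_er2

# Step 5: define an update-anywhere replicate with timestamp conflict resolution
cdr define replicate --conflict=timestamp repl1 \
    "primary_db@grp_er1:informix.primary_table" "select * from primary_table" \
    "target_db@grp_er2:informix.target_table"  "select * from target_table"

# Step 6: start the replicate
cdr start replicate repl1
```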
Shared Disk secondary: Introduction
In Shared Disk (SD) secondary replication, the primary and SD secondary servers share disk space through a high-availability cluster configuration. In this configuration, no physical copy of the database is stored on the SD secondary server. If the primary and SD secondary servers exist on the same machine, they can both access the local disk. If they reside on different physical machines, they must be configured to use shared disk devices. Do not configure the primary and SD secondary servers to use operating system buffering such as an NFS mount. Because the primary and SD secondary servers share disk space, starting an SD secondary is very quick, but an SD secondary server cannot be promoted to a standard server outside of the replication environment or to an RS secondary server. An SD secondary server can be used in conjunction with Enterprise Replication, HDR, and RS secondary servers.
When to use an SD secondary server
- Increased capacity: Multiple SD secondary servers can offload reporting capacity without impacting the primary instance.
- Primary server failure backup: In the event of a primary server failure, the SD secondary server can quickly be promoted to the primary server.
In the event of a disk failure, an SD secondary server will not be available as a hot backup. If a hot backup is required, an HDR secondary server or an RS secondary server is recommended.
How SD secondary replication works
Since the primary and SD secondary servers share disk space, no logs need to be passed between the servers. To keep the instances in sync, only the log position is sent.
Starting SD secondary replication for the first time
Configuration parameters that affect SD secondary replication:
- SDS_ENABLE - Enable or disable an SDS server (required)
- SDS_TEMPDBS - The temporary dbspace used by an SDS server
- SDS_PAGING - The paths of two buffer paging files
- SDS_TIMEOUT - The time, in seconds, that the primary waits for an acknowledgement from an SDS server while performing page flushing before marking the SDS server as down
- UPDATABLE_SECONDARY - Controls whether secondary servers can accept update, insert, and delete operations
- TEMPTAB_NOLOG - Controls the default logging mode for temporary tables (required to be set to 1 on SD secondary servers)
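On an SD secondary, these parameters might be set as follows; the dbspace name, sizes, and paths are hypothetical:

```
SDS_ENABLE     1
SDS_TEMPDBS    sdstmpdbs,/u/data/sdstmpdbs,2,0,16000
SDS_PAGING     /u/data/sdspage1,/u/data/sdspage2
SDS_TIMEOUT    20
TEMPTAB_NOLOG  1
```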
Table 3. Steps to set up SDS for the first time
|Step||On the primary||On the secondary|
|1||Set the SDS-related configuration parameters in the onconfig file.||-|
|2||Configure the alias name of the SD primary server: onmode -d set SDS primary alias||-|
|3||-||Set the SDS_ENABLE, SDS_TEMPDBS, and SDS_PAGING configuration parameters.|
|4||-||Set the remaining configuration parameters to match those on the primary server.|
|5||-||Add an entry to the sqlhosts file for the primary server.|
|6||-||Start the SD secondary server: oninit|
Check the documentation for further details on command line syntax and options.
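Because the disks are shared, the whole setup reduces to very little work; here is a sketch with a hypothetical alias prim_alias:

```shell
# On the primary: register the SD primary alias
onmode -d set SDS primary prim_alias

# On the SD secondary (same shared disks, its own onconfig and sqlhosts): start it
oninit
```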
Promoting an SD secondary server to primary server
In the event of a primary server failure, promote an SD secondary server to the primary server by issuing one of the following commands:
onmode -d set SDS primary alias
onmode -d make primary
Remote Standalone secondary servers: Introduction
An RS (Remote Standalone) secondary server is very similar to an HDR secondary server. It can be used in a high availability cluster for disaster recovery, contains a complete copy of the database, receives logs in much the same way as HDR, and requires identical hardware and data layout between the primary and secondary servers. With an RS secondary server, the limitation of having a single secondary server has been lifted, providing increased availability.
While an HDR environment and an RS secondary environment are similar, there are two main distinctions:
- HDR supports both synchronous and asynchronous mode, while an RS secondary supports only asynchronous replication.
- HDR uses synchronized checkpoints, while an RS secondary server does not.
When to use an RS secondary server
- Increased server availability: Having multiple RS secondary servers provides greater availability.
- Geographically distant backup support: By spreading the replicating nodes across multiple geographical locations, the possibility of a single disaster causing an outage is decreased.
- Improved reporting performance: Multiple RS secondary servers can lessen the impact of reporting on the primary server by offloading reporting to the secondary servers.
- Availability over unstable networks: In an environment with a slow or unstable network, the RS secondary server's utilization of asynchronous replication means that no delay will be seen on the primary server. Neither transaction commits nor checkpoints are synchronized between the primary and RS secondary servers.
How RS secondary replication works
When Data Manipulation Language (DML) statements are executed against the logged database on the primary server, logical log records are sent to the secondary server. The secondary server applies the logical log records. Updates to the RS secondary server are always asynchronous.
Starting RS secondary replication for the first time
Configuration parameters that affect RS secondary replication:
- HA_ALIAS - The server alias for a high-availability cluster
- LOG_INDEX_BUILDS - Enable or disable index page logging (required)
- UPDATABLE_SECONDARY - Controls whether secondary servers can accept update, insert, and delete operations
- FAILOVER_CALLBACK - Specifies the path and program name called when a secondary server transitions to a standard or primary server
- TEMPTAB_NOLOG - Controls the default logging mode for temporary tables (required to be set to 1 on RS secondary servers)
Table 4. Steps to set up RS for the first time
|Step||On the primary||On the secondary|
|1||Install and register UDRs, UDTs, and DataBlade modules.||Install UDRs, UDTs, and DataBlade modules.|
Check the documentation for further details on command line syntax and options.
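A first-time RSS setup parallels the HDR procedure. The following sketch assumes ontape for the backup and uses the hypothetical server names prim_serv and rss_serv:

```shell
# On the primary (prim_serv):
onmode -wf LOG_INDEX_BUILDS=1   # enable index page logging
onmode -d add RSS rss_serv      # register the RS secondary
ontape -s -L 0                  # level-0 backup

# On the RS secondary (rss_serv), after a physical restore (ontape -p):
onmode -d RSS prim_serv         # join the cluster as an RS secondary
```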
Promoting an RS secondary node to primary
In the event of a primary server failure, promote the RS secondary server to the primary server by issuing the following command:
onmode -d make primary alias
Continuous log restore: Introduction
Continuous log restore can be used to establish a second system (a hot backup) that can replace the source/primary system if the source system fails. This feature lets you perform a continuous restore of logical log backups using the ontape and ON-Bar utilities. Logical logs backed up on the source system can be restored on the target system as they become available.
A normal log restore restores all of the available log file backups and applies the log records. Transactions that are still open are rolled back in the transaction cleanup phase, and then the server is brought into quiescent mode. While the server is in quiescent mode, no additional logical logs can be restored.
With continuous log restore, the server is put into log restore suspended state after the last available log is restored. The restore client (ontape or ON-Bar) exits and returns control to you. With the server in this state, another logical restore can be started as additional logical logs become available. This cycle can continue indefinitely.
If the source system fails, the remaining available logical logs can be restored on the target system, which can then be brought online and function as the new source system. The version of IDS must be identical on both the source and target systems.
Continuous log restore is designed to assist disaster recovery. It is not designed as a high-availability solution. However, a continuous log restore server can be promoted into an HDR secondary server.
How to set up continuous log restore
Table 5. Steps to set up continuous log restore using ontape
|Step||On the source||On the target/standby|
The following steps provide detailed descriptions of the steps outlined in Table 5 above:
- Take a level 0 backup of the source server.
- On the target server, perform a physical restore from the level-0 backup created in Step 1 using the same utility that was used for the backup. Do not perform a logical restore.
- On the source server, back up the logical logs using ontape.
- Copy or mount the logical log backup files from the source server to the target server, and rename the files as needed (or use the environment variable IFX_ONTAPE_FILE_PREFIX to set the filename).
- On the target server, perform the logical log restore using the following command. The target server stays in recovery mode.
ontape -l -C
- After all the needed logical logs are restored, run the following command to complete restoring the logical logs and bring the server into quiescent mode:
ontape -l -X
Note: Continuous log restore is stopped on the target server after the server is changed into quiescent or online mode. You must start a new restore to enable continuous log restore again.
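The restore cycle on the target can be sketched as a simple polling loop; the directory names and the staging step are assumptions for illustration:

```shell
# Repeatedly pick up newly arrived log backups and apply them on the target.
while true; do
  if ls /backup/incoming/* >/dev/null 2>&1; then
    mv /backup/incoming/* /backup/logdir/    # stage logs where ontape expects them
    ontape -l -C                             # restore; server stays in recovery mode
  fi
  sleep 60
done
```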
After reading this tutorial, you should have a better understanding of the following:
- The various replication offerings available with IDS 11.50
- Replication terminology
- Configuring replication environments
- developerWorks Informix zone: Get the resources you need to advance your skills in the Informix arena.
- "The IDS Detective Game" (developerWorks, April 2008): Learn or teach the basics of Informix Dynamic Server (IDS) and relational databases with an interactive game called "The IDS Detective Game".
- IDS roadmap for administrators, developers, and end users: Find resources for all aspects of IDS—planning, installing, configuring, administering, tuning, monitoring, and more.
- Informix Education Training Path: See the courses you need to take to achieve particular skills or certification.
- Informix library: Learn more details about IDS from the online manuals or the IDS Information Center.
- IBM Informix Dynamic Server 11.50 Information Center: Find information that you need to use the IDS family of products and features.
- developerWorks Information Management zone: Learn more about Information Management. Find technical documentation, how-to articles, education, downloads, product information, and more.
- Stay current with developerWorks technical events and webcasts.
Get products and technologies
- Informix Dynamic Server Express Edition: Download a trial version of Informix Dynamic Server Express Edition to get started with IDS.
- Informix Dynamic Server Enterprise and Developer Edition: Download a free trial version of Informix Dynamic Server Enterprise or Developer Edition.
- Download IBM product evaluation versions or explore the online trials in the IBM SOA Sandbox, and get your hands on application development tools and middleware products from DB2®, Lotus®, Rational®, Tivoli®, and WebSphere®.
- Participate in the discussion forum.
- IDS Experts Blog: Read the technical notes on Informix Dynamic Server from a worldwide team of Development and Technical Support engineers.
- Participate in developerWorks blogs and get involved in the My developerWorks community; with your personal profile and custom home page, you can tailor developerWorks to your interests and interact with other developerWorks users.