DB2 Universal Database and the Highly Available Data Store

Critical database applications demand a robust strategy for preventing data loss and guaranteeing high availability of your data store. This article surveys your options for high availability on Linux, UNIX, and Windows platforms.

Paul Zikopoulos (paulz_ibm@msn.com), Database Global Sales Support team, IBM

Paul C. Zikopoulos, BA, MBA, is an award-winning writer and speaker with the IBM Database Global Sales Support team. He has more than seven years of experience with DB2 UDB and has written numerous magazine articles and books about it. Paul has co-authored the books: DB2 Version 8: The Official Guide, DB2: The Complete Reference, DB2 Fundamentals Certification for Dummies, DB2 for Dummies, and A DBA's Guide to Databases on Linux. Paul is a DB2 Certified Advanced Technical Expert (DRDA and Cluster/EEE) and a DB2 Certified Solutions Expert (Business Intelligence and Database Administration). You can reach him at: paulz_ibm@msn.com.



James Stittle, IBM

James Stittle joined the IBM Toronto Lab team in 1982. He has worked in a variety of roles within the development community. Jim joined the DB2 for distributed platforms project in 1990 and has been part of the Worldwide DB2 UDB support organization for the last 8 years. He has spoken at many conferences, including IDUG and DB2 Tech. Someday he hopes to grow up and finish building his airplane (an Osprey 2).



Roman Melnyk, Ph.D. (roman_b_melnyk@hotmail.com), Senior Member of the DB2 Information Development team, IBM

Roman B. Melnyk, PhD, is a senior member of the DB2 Information Development team, specializing in database administration, DB2 utilities, and SQL. During more than nine years at IBM, Roman has written numerous DB2 books, articles, and other related materials. Roman co-authored DB2 Version 8: The Official Guide, DB2: The Complete Reference, DB2 Fundamentals Certification for Dummies, and DB2 for Dummies. You can reach him at roman_b_melnyk@hotmail.com.



09 October 2003

Introduction

High availability (HA) is a desirable characteristic for most computing systems. A robust HA solution minimizes downtime, an essential capability if you want to eliminate the negative financial impact of system outages.

The Standish Group International, a market research firm specializing in mission-critical software and e-commerce, conducted a study to quantify the cost of data outages. They estimated that for a transaction processing system, the cost associated with an outage is approximately $2,500 (1) per minute, which translates to $150,000 per hour. In fact, the cost associated with an outage during peak business hours was estimated at $7,800 per minute, or $468,000 per hour!

It isn't just online auction houses and booksellers that need their systems running all the time, either. The Standish Group study also looked at outage costs for data warehousing systems and found them to be an astonishing $5,800 per minute, rising to $6,300 per minute during peak loads when decision-support deliverables are being generated. Because customer relationship management, real-time risk assessment, supply chain management, and enterprise resource planning applications are tied into a company's data stream, you can begin to see how the requirement for business data has changed the way that companies operate.

Although these numbers likely represent a rather conservative view of how much money would be lost due to an outage, they do provide some idea of why we are very focused these days on keeping systems up and running around the clock, 24-by-7.

But why is the opportunity cost of downed data so high? Today, more than ever, data has become a corporation's lifeline. It's the raw ingredient of business intelligence and most of the activities that surround the business. So, the moment data access is lost, the business stalls and valuable resources (both human and capital investment) become inefficient or unavailable.

The business marketplace is also a much different entity than it was just three short years ago. Customers demand your services at their convenience. The impact of this 'can't wait for you' phenomenon has been compounded by the proliferation of a wide assortment of devices with Internet connectivity: always connected, always ready. Regulation has also increased the demands for available data. From mandatory real-time risk assessment models like Basel II to strict standards governing how medical information must be stored, such as HIPAA, the information asset is more and more a hardened part of the day-to-day operations of a business.

At its core, available data starts with an available disk subsystem, networking layer, and storage architecture. However, the 'smarts' that can be leveraged from your data assets depend on the data management software you choose. IBM® DB2® Universal Database™ (DB2 UDB) is built for reliability and availability, so it's up when you are open for business; on the Internet, that means all the time.

There are two types of downtime that affect the availability of your business data. Planned outages typically refer to operations that make the data unavailable as a result of maintenance-like activities such as data loading, reorganizations, statistics collection, and so on. DB2 UDB Version 8.1 is packed with features that virtually eliminate planned outages for management activities.

Unplanned outages refer to events that you cannot plan for: a disaster, a downed server, and so on. This article focuses on the key approaches to reducing downtime associated with unplanned outages, comparing and contrasting their respective advantages and disadvantages.


A high availability road map

High availability (HA) for unplanned outages in the database world encompasses the following three focus areas:

  • The database itself. You want a database that stays up, is fast, scalable, has good query and utility performance, avoids software-based corruption, provides fast recovery from any kind of failure, and has facilities for online, granular maintenance.
  • Hardware and software redundancy. What do you do when the hardware or the operating system fails? We will discuss clustering techniques for enabling standby machines to take over when a production machine fails.
  • Disaster recovery. What do you do if a whole site fails? What do you do if some unforeseen disaster (natural or man-made) occurs? How do you keep your business up and running?

High availability clustering for unplanned outages

What happens when a system fails? By far, the most common answer is to have another system waiting to assume the failed system's workload and continue business operations. The concept of clustering for high availability encompasses the notion of a number of interconnected machines (referred to as an availability cluster (2) ). If one of those machines should fail, the resources required to maintain business operations are transferred to another available machine in the cluster.

There are several key items that you must plan for when setting up a cluster:

  • The disks used to store the data have to be connected by a private interconnect or LAN to the servers that make up the cluster. For example, in a two-node cluster, if machine 1 were to fail, machine 2 has to have both physical and logical access to machine 1's disks.
  • A method for the automatic detection of a failed resource. In clustering terminology, the daemon or process that drives this detection is referred to as a heartbeat monitor. By checking a resource's heartbeat, its failover partner can confirm that it is up and running and that operations can continue as usual. If a heartbeat isn't detected, another server in the cluster can initiate the transfer of the failed machine's resources and resume business operations.
  • The automatic transfer of resource ownership to one or more surviving cluster members. For example, if users are connected to a failed machine, you want that machine's IP address transferred to a surviving server so that client applications can reconnect without intervention or a change to the application itself. After failover, a simple reconnection to the same IP address is serviced by a surviving server in the cluster. (A minimal sketch of such a takeover script follows this list.)
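
To make the takeover concrete, here is a minimal sketch of the kind of resource script a cluster manager might invoke on the surviving node. This is not one of the vendor scripts that ships with DB2 UDB; the instance name, virtual IP address, mount point, and database name are illustrative assumptions, while db2start and RESTART DATABASE are standard DB2 commands.

    #!/bin/sh
    # Hypothetical takeover script invoked by the cluster manager on the
    # surviving node. All names below are assumptions for illustration.
    SERVICE_IP=192.0.2.10        # virtual IP address that clients connect to
    INSTANCE=db2inst1            # DB2 instance owner on this machine

    # 1. Acquire the failed machine's service IP address (platform-specific).
    ifconfig eth0:1 ${SERVICE_IP} netmask 255.255.255.0 up

    # 2. Mount the shared disks that hold the failed partition's data.
    mount /db2data

    # 3. Start the instance and drive crash recovery immediately, rather
    #    than waiting for the first user connection to trigger it.
    su - ${INSTANCE} -c "db2start"
    su - ${INSTANCE} -c "db2 restart database proddb"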

High availability clusters are available for a wide array of today's most popular operating systems. Table 1 lists the clustering software supported by DB2 UDB v8.1 on each platform.

Table 1 - High availability software and DB2 UDB

Operating System: Supported Clustering Software

  • AIX®: High Availability Cluster Multiprocessing (HACMP)®, High Availability Geographic Cluster (HAGEO)
  • HP-UX: MC/ServiceGuard
  • Linux: SteelEye LifeKeeper, Veritas Cluster Server, Legato Cluster, Tivoli® automation for Linux, Mission Critical Linux Convolo Cluster Dataguard Edition, Linux Heartbeat (open source) (3)
  • Sun Solaris: Sun Cluster, Veritas Cluster Server®
  • Windows: Microsoft Cluster Services (MSCS)

DB2 UDB ships special scripts and commands that are tailored to each vendor's clustering software on all the platforms that DB2 UDB supports. These scripts and commands allow database administrators (DBAs) to deploy a wide array of flexible clustering scenarios for high availability clustering. Failover clustering can be implemented in DB2 UDB in both partitioned and non-partitioned databases.

There are a number of factors that DBAs should consider when implementing a failover cluster. First, consider the amount of time it takes for a failure to be detected, plus the transfer time associated with assigning the resources to the backup server. Take special care when defining the heartbeat: you would not want normal network latency or slow application performance to mark a server as failed when it hadn't actually failed. The next consideration is the amount of time it takes for the database to recover on the backup server; this is likely the largest factor when trying to configure an available environment. Finally, DBAs should be aware of the amount of time it takes for the application to time out and reconnect to the database server.

HA clustering configurations

Figure 1 shows the most common high availability cluster configurations. There are various vendors that support high availability clustering, and all of the vendors listed in Table 1 support the following four configurations:

Figure 1 - The most common HA configurations

Note that DB2 UDB has special pricing for high availability. You can learn more by reading the article Licensing DB2 UDB Version 8.1 in a High Availability Configuration by Paul Zikopoulos.

Now let's take a look at what is offered by each of these configuration options.

Idle standby
Consider the four-node database cluster shown in Figure 2. There are four physical machines in the cluster, each with one partition. In this configuration, one machine (the far right machine in Figure 2) sits idle, ready to take over if another machine fails.

The advantage of idle standby is that after a failover, you will have three machines working, which is the same number of machines you had before, so you'll see no performance degradation after the failover.

The disadvantage of this approach is that it can seem expensive: you have a machine that is just sitting there waiting for another machine to fail. However, it is worth noting that you could use this machine for other purposes during its idle time, such as development and testing. As well, a certain amount of work has to go into cabling all of the systems together. Despite the perceived expense, the competitive cost of DB2 UDB actually makes it more affordable (including server and software costs) to implement an idle standby system than the active/active system associated with a popular clustered database available in the market today.

Figure 2 shows a detailed idle standby high availability solution. In this case, three machines are working, with a fourth machine on standby waiting to take over. When machine number 3 experiences a failure and becomes unavailable, its workload moves over to the idle machine. The previously idle machine takes on the workload and IP address of machine number 3.

End users that were connected to the failed machine (in this case, machine number 3) will lose their existing connections. However, because machine number 4 has taken over machine number 3's IP address, applications that issue a new CONNECT statement aimed at machine number 3 will be connected to machine number 4. The move is transparent to them; they will not know that they have been moved over.
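
One way to arrange this transparency is to catalog the database against the service (virtual) IP address rather than a physical host name, so that no catalog changes are needed after a failover. Here is a minimal sketch using the standard CATALOG and CONNECT commands; the node name, address, port, database name, and user are assumptions:

    # Catalog the server once, under its virtual (service) IP address
    db2 catalog tcpip node hanode remote 192.0.2.10 server 50000
    db2 catalog database proddb at node hanode

    # After a failover, the application simply reconnects; the same
    # address is now serviced by the surviving machine
    db2 connect to proddb user appuser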

End users on the other machines are not affected by this failure at all; they can continue to access data on partition number 1 and partition number 2 as usual. Unless they attempt to access data on partition number 3, they will not notice anything.

Figure 2 - An idle standby clustering solution

As its name would suggest, an idle standby configuration is an example of an active/idle setup with respect to DB2 UDB licensing terms surrounding high availability.

Active standby
An active standby configuration is similar to an idle standby, except that the failover machine is active. After a failure, the workload transfers over to the other machine. Depending on the active standby's workload, response times may or may not deteriorate. For example, we often see instances where an active machine is performing some sort of lower priority work (for example, reporting) and when a failure occurs, the lower priority work is halted to maintain response times that are consistent with the failed server. As its name would suggest, an active standby configuration is an example of an active/active setup with respect to DB2 UDB licensing terms surrounding high availability.

Mutual takeover
In a mutual takeover configuration, shown in Figure 3, there are four active machines doing production work. The advantage to this setup is that there is no idle hardware. The disadvantage in this scenario is that after a failure, there are only three machines working on the production workload, whereas previously there were four, so response times are more likely to increase.

The red arrows in Figure 3 represent the failover relationships defined for mutual takeover. In cluster number 1, partition number 1 would go to machine number 2 following a failure. If machine number 2 were to fail, partition number 2 would go to machine number 1. The same could be said for cluster number 2 between machine numbers 3 and 4 and their respective database partitions.

One thing that is important to note here is that DB2 UDB recognizes one cluster in this scenario: a database cluster consisting of four machines. This could be confusing, because you have likely noticed that there are two clusters in Figure 3. Why? Those are high availability clusters. In this case, we've defined two independent HA clusters; HA cluster number 1 has two machines in it, as does HA cluster number 2. They are independent with respect to DB2 UDB, even in a partitioned database system.

This highlights one of the advantages of a mutual takeover configuration: you can replicate the high availability failover scenario without having to worry about the size of the database cluster. For example, consider a scenario of 125 HA clusters, each with two nodes, within one database. The database has 250 nodes, each of them in a high availability environment, and each of them within an HA cluster of only two nodes. The storage interconnect in Figure 3 only needs to connect two systems at any one time; hence the benefit. These types of interconnects are relatively inexpensive and fairly popular when compared to the potential cabling nightmare of linking up all of the machines, as in Figure 4.

Figure 3 - A mutual takeover clustering solution

As you can see in Figure 3, the logical definitions that define partition number 4 have now moved to machine number 3. After the takeover has happened, machine number 3 is handling the workload of both partition number 3 and partition number 4. This demonstrates why the disks that partition number 4 was sitting on must now be accessible from machine number 3.

The mutual takeover configuration in Figure 3 is an example of an active/active setup with respect to DB2 UDB licensing terms surrounding high availability.

Balanced mutual takeover
This is a special kind of mutual takeover, another popular configuration that we see used in many partitioned database environments. In this configuration, a failed machine's workload is distributed to the other machines in a balanced fashion. This approach requires some work in setting up your partitioned database: the number of logical database partitions (known as multiple logical nodes, or MLNs) on any machine must equal the number of physical servers in your environment, minus 1. Four physical servers would require three MLNs per server, for a total of 12 DB2 partitions.
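
In a partitioned database, this layout is recorded in the db2nodes.cfg file, which maps each database partition to a host and a logical port. A sketch of the 12-partition, four-machine layout in Figure 4 might look like the following (the host names are assumptions; the columns shown follow the UNIX format: partition number, host name, logical port):

    1   machine1  0
    2   machine1  1
    3   machine1  2
    4   machine2  0
    5   machine2  1
    6   machine2  2
    7   machine3  0
    8   machine3  1
    9   machine3  2
    10  machine4  0
    11  machine4  1
    12  machine4  2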

A balanced mutual takeover configuration is shown in Figure 4.

Figure 4 - A balanced mutual takeover clustering solution

In the mutual takeover example (Figure 3), your configuration was left with an unbalanced system after the failure. Think about it: you had four nodes, each with one partition running underneath it, but after the takeover, one of those nodes had two partitions, so the system was unbalanced. Cluster 1 would outperform cluster 2 hands down.

In the case of a balanced mutual takeover, you can see that you end up with the same number of partitions on each machine, thereby spreading the effects of the failure across the system. This is very important. Will this mean that all transactions now run slower across the entire system? Perhaps. If this is an issue, then you may want to use a configuration that is active/idle. However, many IT departments sign up to honor what is called a Service Level Agreement (SLA). In a finely tuned environment, the production system at full capacity may blow away the SLA targets. This approach allows DBAs to configure environments that can remain within the SLA zone after a failure by spreading out the production workload equally across a number of machines.

In Figure 4, machine number 1 has partitions 1, 2, and 3; machine number 2 has partitions 4, 5, and 6; machine number 3 has partitions 7, 8, and 9; and machine number 4 has partitions 10, 11, and 12. After a failure on machine number 4, partition 10 is moved to machine number 3, partition 11 is moved to machine number 2, and partition 12 is moved to machine number 1.
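
In db2nodes.cfg terms, this takeover amounts to re-homing machine number 4's three partitions, one to each survivor. Continuing the hypothetical file above, only the last three entries change, to something like the following (each host picks up logical port 3, the next free port on that host):

    10  machine3  3
    11  machine2  3
    12  machine1  3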

A disadvantage of this approach soon becomes apparent, however. This configuration requires that your storage interconnect have access to all four systems. In the mutual takeover example, a storage interconnect was only needed between two systems. This configuration is a bit more complex to set up too; however, after a failure, you end up with a balanced system. This does not necessarily mean that you will get the same performance that you had before the failure when you had four systems. Now you only have three systems. But the performance will be balanced, each machine carrying an equal load.

A balanced mutual takeover configuration is an example of an active/active setup with respect to DB2 UDB licensing terms surrounding high availability. However, it could be set up as an active/idle implementation too.


Disaster recovery

The previous section dealt with unplanned outages as a result of some sort of network or hardware failure. However, there are other categories of unplanned outage that DBAs must take into consideration when designing fault tolerant data stores. More attention must be paid to disaster recovery strategies.

Disaster recovery can encompass many areas. For example, natural disasters such as floods, hurricanes, or earthquakes can affect a data center. Infrastructure disasters, such as power failures, fires, or water main breaks can occur. Operational disasters, such as viruses, human error, or sabotage must also be taken into consideration.

The site may fail, but with proper planning, the data can be saved and operations resumed elsewhere. This section will detail the different approaches that DBAs can take to safeguard their systems from physical interruptions.

Transportation of database backups

The easiest way to implement a disaster recovery strategy is simply to transfer your backups to another machine and restore your databases from them. We won't cover that approach in detail here, since it is a pretty simple concept.

DBAs who favor this approach like it because it is a low-cost option, and you can recover everything up to the last database backup. Of course, all of the transactions since you last took a backup will be lost. This may or may not be a problem for your environment. DB2 UDB v7.2 introduced many new backup features, like cumulative and delta backups, that can make this process easier, although you do have to have access to all of the generated files.
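
As a sketch of how these features fit together, the following CLP commands take a weekly full backup supplemented by cumulative and delta backups; the database name, target path, and schedule are assumptions. Note that incremental backups require the TRACKMOD database configuration parameter to be enabled first.

    # One-time setup: track modified pages so incremental backups are possible
    db2 update db cfg for proddb using TRACKMOD ON

    # Sunday: full online backup
    db2 backup database proddb online to /backups

    # Wednesday: cumulative incremental (all changes since the last full backup)
    db2 backup database proddb online incremental to /backups

    # Other days: delta (changes since the most recent backup of any kind)
    db2 backup database proddb online incremental delta to /backups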

You can enhance this approach by transporting the logs as well. Then, on another server, you can roll forward through the logs from the last backup date through to the end of the logs. This approach lets you recover more data; however, it lengthens the recovery time, because you have to roll forward through all of the database logs taken since the last backup. Moreover, you don't have any way to capture those transactions that are living in the active log.
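
On the recovery server, the sequence might look like the following sketch (the backup timestamp, paths, and database name are assumptions). The OVERFLOW LOG PATH clause points the rollforward at the directory holding the transported logs.

    # Restore the most recent backup image shipped from production
    db2 restore database proddb from /backups taken at 20031009120000

    # Apply the shipped logs, then stop to make the database usable
    db2 rollforward database proddb to end of logs and stop overflow log path (/shipped_logs)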

Log shipping to a standby database

Before we describe what log shipping is, we want to make it clear that DB2 UDB does indeed support log shipping. There is a misconception in the marketplace that DB2 UDB does not support this data protection approach.

Log shipping is perhaps the most common way to support a standby database. What is a standby database? A standby database is a database at a remote site. It is a complete copy of the production database. Whole logs are copied over to that standby database on an ongoing basis. Logs could be moved over to the standby machine via an archive solution that supports this feature, a custom script that uses the FTP protocol, or the native user exit program that is supplied with DB2 UDB to support log shipping.

The standby database continuously rolls forward through the logs, ensuring that it is current up to the last successfully shipped log file. When the primary database fails, you simply copy over any logs that are left, roll forward to the end of the logs, and stop. Clients then reconnect to the standby database. This process is shown in Figure 5.
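
Here is a minimal sketch of this cycle, with hypothetical names and paths: seed the standby once from a production backup, then keep reapplying logs as they arrive. Omitting AND STOP leaves the database in rollforward pending state until you actually need to take over.

    # Once: initialize the standby from a production backup image
    db2 restore database proddb from /backups taken at 20031009120000 replace history file

    # Repeatedly (for example, from cron), as new log files arrive:
    db2 rollforward database proddb to end of logs overflow log path (/shipped_logs)

    # At takeover time: apply the final logs and bring the database online
    db2 rollforward database proddb to end of logs and stop overflow log path (/shipped_logs)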

Figure 5 - Log shipping and DB2 UDB

Log shipping is very simple to set up and use. This is clearly one of the main advantages to this approach to disaster recovery. It doesn't require any extra software, and can be implemented natively using all the features that come with DB2 UDB. It has minimal impact to a production system and can be implemented in such a way as to provide a no transaction loss system.

Those of you who are worried about the criticality of your data will have some concerns with this approach. For example, assume that the production site was actually destroyed as a result of some sort of natural disaster. Even worse, let's assume that the disaster occurred at the very moment a full log was finished, but just before it was shipped to the disaster recovery site. In this example, the standby database would be missing that entire log, as well as any transactions in the active log. If you cannot afford to be a full log's worth of data behind, log shipping can be enhanced using mirroring techniques (discussed below). Of course, you can configure the logs so that the number of transactions they contain is limited, but this approach could result in a performance penalty, and it only minimizes the impact of missing transactions. You could still be missing critical transactions.

Another disadvantage of log shipping is that the standby database must be physically and logically identical to the production database. It also cannot be used for read operations, because it is in rollforward pending state. Finally, some administrative changes (for example, creating an index) are not reflected in the logs.

Log shipping to a standby database with a zero transaction loss goal
But what if you don't have the luxury of entertaining a 'log lag' with respect to the timeliness of your data?

To address this concern while maintaining the benefits of log shipping, many systems today deploy some sort of proprietary mirroring system for their logs, such as IBM's Peer to Peer Remote Copy (PPRC)®. Figure 6 builds on Figure 5 with this type of feature. Instead of shipping the logs as soon as they are archived, the logs are mirrored. In this case, PPRC implements disk mirroring, but only for the logs. The advantage of this approach is that the mirrors always contain the data; if the production database site is indeed destroyed, you have zero transaction loss. DB2 UDB has supported log mirroring since Version 7.2.
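
DB2's native log mirroring writes every active log file to two paths, protecting the logs against the loss of a single device (the remote-distance protection in Figure 6 still comes from the storage subsystem). A one-line sketch, assuming a database named proddb and a hypothetical mirror path on a physically separate device; in Version 8.1 this is the MIRRORLOGPATH database configuration parameter, while Version 7.2 used the DB2_NEWLOGPATH2 registry variable:

    # Keep a second copy of the active logs on a separate device
    db2 update db cfg for proddb using MIRRORLOGPATH /mirror/activelogs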

Figure 6 - Log shipping and DB2 UDB with log mirroring

You can also use the I/O suspend feature, also known as split mirror support, to quickly initialize the standby database. It is often necessary to initialize the standby database frequently; for example, a scenario in which you are making frequent, unlogged data definition language (DDL) changes means that you need to create new standby databases before logs can be applied. DB2's I/O suspend feature makes this painless. Another capability associated with I/O suspend is offloading production system backups to the standby database: DB2 supports creating a standby database and then using that database to create a backup image, relieving the production system of the workload associated with creating backups.
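
Here is a sketch of the split mirror flow, with hypothetical names: suspend writes on the production database, break the mirror at the storage level, resume, and then initialize the copy on the standby machine with db2inidb. The AS STANDBY option leaves the copy in rollforward pending state so that shipped logs can be applied to it.

    # On the production server: quiesce writes while the mirror is split
    db2 connect to proddb
    db2 set write suspend for database
    # (split the mirror at the storage-subsystem level here)
    db2 set write resume for database

    # On the standby server, against the split copy of the database:
    db2inidb proddb as standby      # remains in rollforward pending state

    # Optionally, back up the standby copy to offload the production server
    db2 backup database proddb to /backups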

Log shipping can also be used to protect your data from various types of human error. For example, a DBA could introduce a delay into the log rollforward process, rolling the logs forward only to the start of the business day. This gives users on the production database a chance to recognize that they may have made an error; perhaps they ran a large delete operation that should not have been run, or even accidentally dropped a table. Because of the delay, there is a chance to recover from such errors before invoking the next rollforward operation, thereby safeguarding your data against human error.
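
For example, instead of rolling forward to the end of the logs, the standby's scheduled job might stop at the start of the business day. A sketch with a hypothetical database name and path; by default, the point-in-time value is interpreted as coordinated universal time:

    # Apply transactions only up to 8:00 AM on October 9, 2003
    db2 rollforward database proddb to 2003-10-09-08.00.00.000000 overflow log path (/shipped_logs)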

The Basics of Log Shipping by Dale McInnis is an excellent introduction to log shipping with DB2 UDB. We suggest that you read this article if you are interested in this common approach to safeguarding your data.

Replicating to a standby database

Another technique for maintaining a standby database is replication, as shown in Figure 7. DB2 UDB on distributed platforms comes with replication built into the base server. Replication in DB2 UDB is a log-based, not a trigger-based, technique, which makes it pretty agile and quick at getting the data to another server.

Figure 7 - Using the CAPTURE and APPLY process built into DB2 UDB to replicate data to the standby database

The advantage of using replication over log shipping is that the standby database is available for read/write (R/W) access. In the log shipping scenarios, because the database is in rollforward pending state, users cannot query it. It may be beneficial to leverage a standby database as a reporting tool until it is needed to act as the production server.

As well, because you are in effect replicating the data, which is a replay of the logs on the standby database, the underlying hardware or operating system does not need to be the same as the production database. For example, you may be hosting your production system on a bulletproof proprietary UNIX® system, but choose to leverage some low-cost commodity hardware with a Windows-based reporting tool on the standby machine.

Another feature that some DBAs like about using replication is that DB2 UDB supports bi-directional replication. This means that an application could update data on the standby database and this could be replayed on the production database, with full conflict resolution strategies in place to handle data update discrepancies.

Finally, you don't have to replicate the whole database. Because replication is enabled in DB2 UDB at the table level, you could identify some critical tables (for example, transaction tables) to replicate, while leaving tables that can be quickly reloaded (for example, summary or reporting tables) alone.
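
Registering a table for replication starts with enabling change capture on it; the Capture program then stages that table's changes from the log. Here is a sketch with hypothetical table and database names; DATA CAPTURE CHANGES is standard DB2 SQL, and asncap and asnapply are the Capture and Apply programs that ship with DB2 UDB v8 (the registration and subscription setup steps are not shown).

    # Mark a critical table so its changes are logged in a form
    # that the Capture program can use
    db2 connect to proddb
    db2 "alter table orders data capture changes"

    # Start Capture on the source server and Apply on the target
    asncap capture_server=proddb
    asnapply control_server=stdbydb apply_qual=AQ00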

A disadvantage of using replication to maintain a standby database is that replication, by its nature, is asynchronous. This could result in some transactions not being replayed in the event of a site disaster. Worse, you may not know which transactions have not been replayed, because you may not know how deep the queue was before the replication cycle. In addition to this, as in the log shipping case, some administrative operations (such as create index) are not reflected in the logs.

Replication was designed to solve a different problem than failover for disaster recovery scenarios. This design point means that there is considerable overhead associated with replication that does not exist with log shipping: reading the logs and populating the staging tables (the CAPTURE component) adds workload to the production server. If you are already using replication for its intended purpose, then extending its use for disaster recovery may make sense. Always keep in mind, though, that replication does not guarantee a definable number of transactions sitting in the staging tables waiting to be applied on the target server. This means that someone will have to run tests to determine the maximum number of transactions that could be lost. The business, in accepting this maximum, will also have to accept the possibility that when a real failure happens, the estimate turns out to be wrong: if the workload changed and there was no additional testing, can you still have confidence in the estimated transaction loss? With log shipping, by contrast, the exposure is bounded by the size of a log, or, better yet, eliminated by technology such as PPRC that guarantees no transaction loss.

Remote mirroring

As the demands for availability mount, both software and hardware vendors have been swift to respond with products and applications that suit the needs of a 24-by-7, no-exception business. Combining these hardware and software solutions can greatly benefit businesses where availability concerns outweigh costs. Quite simply, for many, the cost of being down is too high.

Perhaps the ultimate solution is to remotely mirror everything synchronously: the data and the logs, as shown in Figure 8. Remote mirroring carries the logs and data well beyond the local site; it isn't much of a disaster recovery plan if both the production and the standby servers are in two buildings next to each other. There are a number of approaches to remote mirroring.

Figure 8 - Using remote mirroring for disaster recovery

Combining availability-focused hardware and software can deliver the panacea of availability. For example, IBM Enterprise Storage Server® (ESS) and RAMAC® Virtual Array systems offer the ability to mirror both the data and the logs via the storage subsystem. While this is a more expensive approach, it is a sure-fire way to attain the highest level of availability. Other vendors offer similar "no data loss" technology; examples include EMC's SRDF and Hitachi Data Systems' remote copy offerings.

With this approach, all data associated with the database is mirrored at the remote site. In the case of a failure, the mirrored disks are mounted on the remote site. DB2 is started and performs crash recovery. This is a simple approach, but can be expensive, for the following reasons:

  • The cost of transmitting all of the changed data to the disaster site. All changes written to the database are sent across the wire.
  • The cost associated with the size of the system. Because write activities to the disaster recovery site need to be synchronous, you might experience delays in the commit activities at the primary site. Data can only travel so fast, and the farther apart the two sites are, the longer it will take for a write to make the round trip. To minimize this problem, you may need to have a larger than ideal physical environment. Multiple storage area network (SAN) systems, each making its own remote mirrors, would help to spread out the write activity. Of course, you'll also need sufficient communication bandwidth for each SAN.

Summary

DB2 UDB makes it easy to manage large databases with excellent availability characteristics. There is great flexibility in how you can build and manage your database from both hardware and software perspectives. High availability has become a key factor in business operations, and each of the approaches described here has its associated advantages and disadvantages. We hope that this article gave you a good high-level overview of the options available to you in a DB2 UDB environment.

No matter what availability solution you choose, no matter how much money you spend, or what kind of skill set your department has, there is one key element that is critical to your high availability plans: practice, practice, practice. Don't let an actual crash or downed server be the first time you find out if your "house is in order".


Footnotes

1 Costs are estimated in US dollars.

2 The term cluster has become somewhat overloaded these days. For purposes of this article, clusters (or clustering) refers to configurations implemented to maintain high availability. There is also the notion of clustering for scalability. An example of this is the ability to partition a database across multiple servers.

3 Limited to 2-way high availability clusters.
