IBM WebSphere Developer Technical Journal: Locking strategies for database access

Database locking strategy is one of the most complex areas of application development -- and one of the most critical for reliable application performance and behavior. This article outlines the basic issues, common strategies, and general recommendations on when and how to use different locking strategies.

Paul Ilechko (paul.ilechko@us.ibm.com), Senior Solutions Architect, IBM

Paul Ilechko is a Senior Solutions Architect with IBM Software Services for WebSphere. Mr. Ilechko has over 25 years of experience in the IT Industry, including a background in both mainframe and distributed technologies. He has been involved with WebSphere and J2EE technology almost since their inception. His primary goal is to help IBM clients be successful with these products. Mr. Ilechko has a B.Sc. in Mathematics from the University of London.



22 March 2006


From the IBM WebSphere Developer Technical Journal.

Introduction

One of the most complex areas that developers of multi-user, server-based systems have to deal with is the impact of locking strategies. Implemented badly, locking can lead to all kinds of issues, from performance problems to deadlocks to unexpected application behavior. The purpose of this article is to lay out the basic issues, common strategies and patterns, and some recommendations as to when and how to use locking strategies.

This article assumes that the developer has control of the SQL being used, and does not fully address the specifics of using an Object Relational (OR) mapping tool, which may add additional complexity.


Transaction vs. session locking

Database management systems provide for locking within the scope of a transaction. Read or update locks can be acquired to prevent inconsistent behavior in the application; the actual locks in use depend on the type of SQL statement used by the developer and the isolation level specified on the database connection. However, database locking in and of itself may not be sufficient. In some scenarios, a longer term "logical lock" may need to be held to prevent changes to data that is presently being worked on by a user. This will be referred to as session locking to distinguish it from the explicit database locking capabilities that only exist within the transaction scope, and which we will refer to as transaction locking.

Session locking in systems with a user interface is typically used to solve the well-known "intervening update" problem: the situation where one user reads a record, makes changes, and updates the database, but where a second user has read the same record in parallel and has also made changes to it. In this scenario, the changes made by one of the users will be lost. The same problem can apply to message-based systems where multiple updates can be delivered and where sequence is important. It should be clearly understood that this is locking by convention -- there are no physical locks held; the solution relies on applications understanding and agreeing to honor the locking strategy.

There are two common approaches to dealing with this problem. One is to use a pessimistic approach whereby a logical lock, understood and honored by all applications that may touch the data, is used to block access to the row (or rows) of data used in the business transaction. This data can be "locked" for a substantial time period, so should be used judiciously.

The alternative, optimistic approach is not to lock anything, but to maintain information about the data in use so that it is possible to tell whether it has been modified during the user's session (for example, by saving a timestamp or sequence number that is part of the row). If the application, at update time, sees that this value has been modified, then the update cannot take place, and the user is informed that he or she must try again.

As will be discussed later, pessimistic session locking has several problematic situations that need to be addressed, such as the cleaning up of abandoned locks.


Transaction locking options

Transaction locks (database locks) are those that occur within a single transactional unit of work. This section looks at some of the pros and cons of using optimistic and pessimistic approaches to database locking.

Pessimistic transaction locking

In the pessimistic locking scheme, explicit locks are taken against rows using the SQL SELECT FOR UPDATE statement. Data is then modified, and an SQL UPDATE is issued. In this case, it is clear that all rows have been successfully updated, as they were physically locked prior to the updates being attempted. The locks remain in place until the transaction commits. Because physical database locks are held for longer than with an optimistic locking strategy, there is a greater chance that performance and throughput will be affected, and a greater chance of deadlock situations occurring. Deadlock problems can be reduced by using the lowest possible isolation level (discussed later), by always accessing tables in the same sequence, and by minimizing lock duration by performing as much business processing as possible before obtaining the locks. Deadlocks can also occur due to lock escalation -- when too many locks are held at the same time, the database manager may change from row locks to page or even table locks. Again, minimizing lock lifetimes is critical.
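
For illustration, here is a minimal JDBC sketch of this approach. The ACCOUNT table and its ID and BALANCE columns are hypothetical; the row is physically locked from the SELECT ... FOR UPDATE until the commit or rollback.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PessimisticUpdateExample {

    // Debits an account while holding a physical row lock for the whole transaction.
    // Table and column names are illustrative.
    public void debit(Connection con, int accountId, BigDecimal amount) throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement lock = con.prepareStatement(
                "SELECT BALANCE FROM ACCOUNT WHERE ID = ? FOR UPDATE");
             PreparedStatement update = con.prepareStatement(
                "UPDATE ACCOUNT SET BALANCE = ? WHERE ID = ?")) {

            lock.setInt(1, accountId);
            try (ResultSet rs = lock.executeQuery()) {
                if (!rs.next()) {
                    throw new SQLException("No such account: " + accountId);
                }
                // The row is now locked; no other transaction can change it before we commit.
                BigDecimal newBalance = rs.getBigDecimal(1).subtract(amount);
                update.setBigDecimal(1, newBalance);
                update.setInt(2, accountId);
                update.executeUpdate();
            }
            con.commit();    // locks are released here
        } catch (SQLException e) {
            con.rollback();  // also releases the locks
            throw e;
        }
    }
}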

Certain critical updates should always be done with pessimistic locking. An example would be retrieving and caching the next available batch of keys from a primary key sequence table. This is an infrequently used operation within a short-lived transaction, and there is no reason to take the risk of the update failing.
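
A sketch of such a key allocator follows, assuming a hypothetical KEY_SEQUENCE table with TABLE_NAME and NEXT_KEY columns; the lock is held only for this brief transaction.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class KeyBatchAllocator {

    // Reserves a contiguous batch of keys for one table in a short pessimistic transaction.
    // Table and column names are illustrative.
    public long allocateBatch(Connection con, String tableName, int batchSize) throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement select = con.prepareStatement(
                "SELECT NEXT_KEY FROM KEY_SEQUENCE WHERE TABLE_NAME = ? FOR UPDATE");
             PreparedStatement update = con.prepareStatement(
                "UPDATE KEY_SEQUENCE SET NEXT_KEY = ? WHERE TABLE_NAME = ?")) {

            select.setString(1, tableName);
            long firstKey;
            try (ResultSet rs = select.executeQuery()) {
                if (!rs.next()) {
                    throw new SQLException("No key row for table " + tableName);
                }
                firstKey = rs.getLong(1);
            }
            update.setLong(1, firstKey + batchSize);   // reserve the whole batch
            update.setString(2, tableName);
            update.executeUpdate();

            con.commit();
            return firstKey;   // caller may hand out firstKey .. firstKey + batchSize - 1
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }
}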

It should be noted that there are situations where the FOR UPDATE clause is not available for a given SELECT statement. In this case, there may need to be some application restructuring, or optimistic locking may need to be used instead.

Optimistic transaction locking

Unlike the pessimistic locking scenario, where the application explicitly locks the row (or rows) that is about to be updated, no actual locks are held in this case until the rows are updated, and those locks are taken implicitly by the database manager. The application reads the rows to be updated using a normal SQL SELECT, with no FOR UPDATE option. Data is modified, and the affected rows are re-written. The update is in some way overqualified to ensure that only rows that are in the same state as those originally read are changed. This can be done by using a timestamp or sequence number contained in the row, or by adding every column in the row to the WHERE clause of the UPDATE. The latter option is not very efficient; the timestamp or sequence number approaches are preferred. Additionally, some columns, such as BLOB types, cannot be used in an overqualified update. It should be noted that some OR mapping tools (such as entity beans) will use overqualified updates.

If multiple rows are updated in a single SQL UPDATE, it is difficult to know which rows were successfully updated and which failed, so optimistic locking should only be used when dealing with single rows in the UPDATE statement, rather than sets of rows. However, you can iterate through a set of rows and update each row individually while using optimistic locking techniques.
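
As a minimal sketch, assuming a hypothetical ACCOUNT table carrying a VERSION column that was read earlier in the transaction with a plain SELECT, a single-row optimistic update might look like this; a zero update count means another transaction changed the row first.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticUpdateExample {

    // Updates a single row, overqualified by the version value read earlier with a plain SELECT.
    // Table and column names are illustrative.
    public void updateBalance(Connection con, int accountId, BigDecimal newBalance,
                              long expectedVersion) throws SQLException {
        try (PreparedStatement update = con.prepareStatement(
                "UPDATE ACCOUNT SET BALANCE = ?, VERSION = VERSION + 1 " +
                "WHERE ID = ? AND VERSION = ?")) {
            update.setBigDecimal(1, newBalance);
            update.setInt(2, accountId);
            update.setLong(3, expectedVersion);

            if (update.executeUpdate() == 0) {
                // No row matched: the row was changed (or deleted) since it was read.
                throw new SQLException("Optimistic lock failure for account " + accountId);
            }
        }
    }
}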

Optimistic transaction locking is generally preferred for its performance characteristics and reduced likelihood of deadlock situations. However, this should be evaluated on a case by case basis -- there are situations where pessimistic locking is necessary. If a large number of rollbacks caused by optimistic lock exceptions are occurring, it may be time to rethink your strategy.

Even with optimistic locking, physical locks will be held on updated rows until the end of the transaction (commit time), so it is recommended to complete the transaction as soon as possible after updating the database.


Session locking options

Session locks are those that span multiple transactional units of work; they cover the entire duration of some business process. This section looks at the specifics and the pros and cons of using optimistic and pessimistic approaches to session locking.

Optimistic session locking

In this scenario, a user retrieves data from the database with no locks. Included with the data is sufficient information such that the user can tell, at some future point, if the row has been modified in the database; this could be by using a timestamp or epoch (sequence) number in the table, or merely by saving every single data element (column) as it stands at the time of initial read access. The user can view the data, make changes, and then invoke a transaction to commit those changes.

At the point of attempting to commit the update, the application verifies that no other user has modified the data. If the data has changed, the update is rejected and the user is notified that there has been an intervening update. If there is a set of rows to be updated, some may succeed and some may fail. All decisions are made by the application, not by the database management system. This pattern is described in Patterns of Enterprise Application Architecture by Martin Fowler as optimistic offline lock (see Resources).

The updating transaction may in theory use optimistic or pessimistic transaction locking at the time of update. However, it is unlikely that there is any value in using pessimistic transaction locking to update a single row when there has been the possibility of intervening updates occurring during the entire session; using optimistic transaction locking will be adequate, as it will be based on the state of the row at the time the data was originally accessed.

If a set of rows needs to be updated, and if the application needs to know which, if any, have had intervening updates, then it will be necessary to iterate through the rows and validate that each row is still in its required initial state. If it is essential that either all or none of the rows are updated, then reading each row with a SELECT FOR UPDATE before actually making any changes (that is, using pessimistic transaction locking for the entire set) will prevent inconsistency in the results.
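
The following is a sketch of that all-or-none variant, again assuming a hypothetical ACCOUNT table with a VERSION column: every row in the set is locked and validated against the version captured when the session began before any row is changed.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Map;

public class AllOrNoneUpdateExample {

    // expectedVersions maps each account ID to the VERSION value captured when the session began.
    // Table and column names are illustrative.
    public void applyToAll(Connection con, Map<Integer, Long> expectedVersions,
                           BigDecimal adjustment) throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement lock = con.prepareStatement(
                "SELECT VERSION FROM ACCOUNT WHERE ID = ? FOR UPDATE");
             PreparedStatement update = con.prepareStatement(
                "UPDATE ACCOUNT SET BALANCE = BALANCE + ?, VERSION = VERSION + 1 WHERE ID = ?")) {

            // Pass 1: lock every row and check for intervening updates before changing anything.
            for (Map.Entry<Integer, Long> entry : expectedVersions.entrySet()) {
                lock.setInt(1, entry.getKey());
                try (ResultSet rs = lock.executeQuery()) {
                    if (!rs.next() || rs.getLong(1) != entry.getValue()) {
                        throw new SQLException("Intervening update on account " + entry.getKey());
                    }
                }
            }
            // Pass 2: all rows are locked and unchanged, so every update will succeed.
            for (Integer id : expectedVersions.keySet()) {
                update.setBigDecimal(1, adjustment);
                update.setInt(2, id);
                update.executeUpdate();
            }
            con.commit();
        } catch (SQLException e) {
            con.rollback();   // none of the rows are changed if any check failed
            throw e;
        }
    }
}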

Optimistic session locking is typically used when there is little likelihood of contention for the same data from multiple concurrent users. A financial planning system is a good example, where every account is owned by exactly one financial advisor, and only that advisor can see the accounts that he or she owns. When conflicts are frequent, so are rejected updates, and users may become very frustrated with the system. One possible approach to alleviating the conflict problem is to have the ability to flag a row as open, and therefore possibly about to be updated. This is weaker than a pessimistic session lock -- the convention adhered to by the participants is to warn, not to block. If an application sees the warning flag set, it can inform the user that this data is already in use and any changes made may be lost. The user can then choose whether or not to continue attempting the update.

The pattern in this case is that when the first transaction reads a row it will "stamp" it with the identity of the user, similarly to how pessimistic session locking is handled (covered next). If another transaction retrieves the data and sees that the flag is set by some other user, then it will, by convention, give a warning message to the user that the data is in use. This gives the second user the choice to continue, with the knowledge that any changes that they make could be lost, or to back out.

Another use of optimistic session locking is in event driven or message-based systems where message sequence is not guaranteed. A message that is sent to replicate a database change may include the current version number or timestamp of the data. If the data at the receiving end has been modified by a change with a higher sequence number (or later timestamp), then the earlier change may be rejected.


Pessimistic session locking

In this case, a logical lock (not a physical database lock) is used to flag that a row is in use for a long-term duration. This is typically used when there is intervening user interface interaction and database locking in itself is not sufficient. There are several methods commonly used to implement this pattern, including adding additional columns to the table (such as a lock flag, lock user ID, lock timestamp, and so on), using a separate lock table, or using an external lock mechanism, such as an in-memory hashtable (which is not recommended for obvious scalability reasons). This is described in Fowler (op. cit.) as a pessimistic offline lock. Pessimistic session locks are not blocking locks (and therefore cannot cause deadlocks); the second transaction attempting to lock a row will see the lock when it reads the data and will honor it, informing the user that the data is currently unavailable.
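
As an illustration of the lock-column approach, the following sketch assumes hypothetical LOCKED_BY and LOCK_TIME columns on a CUSTOMER table; because the UPDATE is atomic, only one user can successfully stamp a free row, and no physical lock remains once the transaction commits.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class SessionLockExample {

    // Stamps the row with the user's identity; succeeds only if no one else holds the logical lock.
    // Table and column names are illustrative.
    public boolean acquire(Connection con, int customerId, String userId) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE CUSTOMER SET LOCKED_BY = ?, LOCK_TIME = ? " +
                "WHERE ID = ? AND LOCKED_BY IS NULL")) {
            ps.setString(1, userId);
            ps.setTimestamp(2, new Timestamp(System.currentTimeMillis()));
            ps.setInt(3, customerId);
            return ps.executeUpdate() == 1;   // false: another user already holds the logical lock
        }
    }

    // Clears the logical lock; only the holder may release it.
    public void release(Connection con, int customerId, String userId) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE CUSTOMER SET LOCKED_BY = NULL, LOCK_TIME = NULL " +
                "WHERE ID = ? AND LOCKED_BY = ?")) {
            ps.setInt(1, customerId);
            ps.setString(2, userId);
            ps.executeUpdate();
        }
    }
}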

Pessimistic session locks should generally be write locks; they should let other users read the locked data, unless there is a very strong case to not do so. An example of this might be that data is currently undergoing some major reorganization, and accessing it during this time would provide invalid results.

If only a single type of lock can be associated with the given table, then it may make sense to add extra columns to the original table to control the locking. However, if there are a variety of different types of lock associated with the same table -- read and write locks, or different table groupings being locked as a coarse-grained set (discussed next) -- then it may make more sense to have a lock table associated with the primary table.

There are also several issues to be aware of with pessimistic session locks:

  • Locks may exist across user interactions, which means that they may be abandoned, and therefore a lock timeout strategy is required (a cleanup sketch follows this list).

  • Session locks may need to be forcibly overridden in certain cases, such as a medical emergency in a hospital system, but such overrides should be logged for audit purposes.

  • Since pessimistic session locks can be removed as described above, the application that obtained the lock should always check to see if it is still in force before updating data. In this respect, it must be treated in some ways like an optimistic lock.
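
As a sketch of the timeout strategy mentioned in the first point, the following reuses the hypothetical LOCKED_BY and LOCK_TIME columns from the earlier example and simply clears any lock older than a configured timeout.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class SessionLockCleanup {

    // Clears any logical lock older than the timeout; run periodically or before acquiring new locks.
    // Table and column names are illustrative.
    public int releaseExpired(Connection con, long timeoutMillis) throws SQLException {
        Timestamp cutoff = new Timestamp(System.currentTimeMillis() - timeoutMillis);
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE CUSTOMER SET LOCKED_BY = NULL, LOCK_TIME = NULL " +
                "WHERE LOCKED_BY IS NOT NULL AND LOCK_TIME < ?")) {
            ps.setTimestamp(1, cutoff);
            return ps.executeUpdate();   // number of abandoned locks cleaned up
        }
    }
}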

Pessimistic session locking is a form of serialization and can severely limit system availability, and so should be used with caution. It should only be used in a case when conflicts are frequent and optimistic session locking would lead to severe user contention problems, or when concurrent access to data in the middle of a business session could lead to integrity issues.


Coarse grained locking

A special case of the session locking pattern, mentioned above, is the need for a single lock on a set of tables. This is documented in Fowler (op. cit.) as a coarse-grained lock. It is feasible that there could be multiple coarse-grained locking sets on the same parent table. An example might be at the customer level in a financial system: one coarse grained lock could be to lock all accounts for the customer, another could be to lock all insurance policies. Adding extra columns to each row purely for locking purposes could become an overhead in this scenario; it may be more effective to have a lock table associated with the parent table of the coarse-grained lock set, and add rows to this table to hold session locking information.

Coarse grained locking can be either optimistic or pessimistic.
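
One possible shape for such a lock table is sketched below, assuming a hypothetical CUSTOMER_LOCK table with columns CUSTOMER_ID, LOCK_TYPE, LOCKED_BY, and LOCK_TIME, and a unique key on (CUSTOMER_ID, LOCK_TYPE); the presence of a row is the coarse-grained lock.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class CoarseGrainedLockExample {

    // Acquires one coarse-grained lock (for example, lockType "ACCOUNTS" or "POLICIES") for a customer.
    // The unique key on (CUSTOMER_ID, LOCK_TYPE) guarantees at most one holder per lock type;
    // releasing the lock is simply a DELETE of this row by its holder. Names are illustrative.
    public boolean acquire(Connection con, int customerId, String lockType, String userId) {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO CUSTOMER_LOCK (CUSTOMER_ID, LOCK_TYPE, LOCKED_BY, LOCK_TIME) " +
                "VALUES (?, ?, ?, ?)")) {
            ps.setInt(1, customerId);
            ps.setString(2, lockType);
            ps.setString(3, userId);
            ps.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
            ps.executeUpdate();
            return true;
        } catch (SQLException e) {
            // A duplicate-key failure (typically SQLSTATE 23505) means another user already holds
            // this lock set; a production version would distinguish it from other errors.
            return false;
        }
    }
}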


Insert locking (Namespace locking)

Another special case of locking is the need to lock something where currently no database row exists. An example of this is found in hospital scheduling, which has an extremely complex scheduling process where many resources (doctors, nurses, rooms, equipment, and so on) need to be coordinated to be available in the same time period. An optimistic approach to scheduling would enable the user to select resources, and then attempt to schedule all of them in a single transaction. If an error condition (such as a duplicate key) occurred, the transaction would have to be rolled back, the user would be given the information as to which part of the transaction failed, and then asked to re-select resources as necessary. This is also the approach taken by airline reservation systems, which address a similar problem.

If the number of conflicts or complexity of recovering from conflict is too great, a pessimistic approach can be taken, whereby the system locks the keys that would be used to create the rows in a separate distributed locking mechanism. One solution is a locking table in the database that enables the user to insert locks that, by convention, refer to other items. One problem with a database-centric approach to scheduling is that there is no capability to unlock items inside a transaction (prior to commit or rollback processing). What is ideally needed is the ability to lock a resource, verify its availability, and unlock it if no longer available. However, given that restriction, the following shows a pattern for implementing a database-centric locking approach:

User begins by asking for available resources

Begin read transaction:
	System searches for resource availability
End read transaction

System may use rules to prioritize possible options
System presents options to user
User makes a selection

Begin Update transaction:
	Validate that selected resources are still available
	Lock all resources using SELECT FOR UPDATE (pessimistic locking), either on the item to be updated or on a related namespace lock row in a locking table for new inserts
	If any failures, rollback transaction and return to user to make a new selection
	Otherwise update all resources
End Update transaction (Commit)
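
To make the namespace locking step concrete, here is a sketch of the update transaction for new inserts, assuming a hypothetical RESOURCE_LOCK table whose primary key is (RESOURCE_ID, TIME_SLOT) and a hypothetical APPOINTMENT_RESOURCE booking table; a duplicate-key failure means a competing reservation won and the whole transaction rolls back.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.Map;

public class ReservationExample {

    // slots maps each resource ID to the requested time slot. Table and column names are illustrative.
    public boolean reserve(Connection con, int appointmentId, Map<Integer, Timestamp> slots)
            throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement lock = con.prepareStatement(
                "INSERT INTO RESOURCE_LOCK (RESOURCE_ID, TIME_SLOT) VALUES (?, ?)");
             PreparedStatement booking = con.prepareStatement(
                "INSERT INTO APPOINTMENT_RESOURCE (APPOINTMENT_ID, RESOURCE_ID, TIME_SLOT) " +
                "VALUES (?, ?, ?)")) {

            for (Map.Entry<Integer, Timestamp> slot : slots.entrySet()) {
                lock.setInt(1, slot.getKey());
                lock.setTimestamp(2, slot.getValue());
                lock.executeUpdate();        // duplicate key here means the slot is already taken

                booking.setInt(1, appointmentId);
                booking.setInt(2, slot.getKey());
                booking.setTimestamp(3, slot.getValue());
                booking.executeUpdate();
            }
            con.commit();
            return true;
        } catch (SQLException conflict) {
            con.rollback();                  // nothing is booked; the user must re-select resources
            return false;
        }
    }
}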

Isolation levels

JDBC defines the following four isolation levels:

  • TRANSACTION_SERIALIZABLE

    Known as Repeatable Read in DB2, this is the strongest isolation level. All the rows in the database table that are affected by the current transaction will be locked; no other transaction can insert, delete, or update a row that would prevent the same query from returning exactly the same results. (Locked rows can, however, be read by other transactions). All referenced rows are locked, not only retrieved rows; the possibility of large numbers of rows being locked means that there is a strong chance of lock escalation to full table locking. In some database management systems, the optimizer may choose to take a table lock immediately based on the expectation that lock escalation is likely to occur. Phantom rows are not permitted with this isolation level, nor are uncommitted reads.

  • TRANSACTION_REPEATABLE_READ

    Known as Read Stability in DB2. Phantom rows can appear with this isolation level, because only rows that meet the predicate condition will be locked. However, these rows are guaranteed to remain stable for the duration of the unit of work; the locks are not released until Commit. Uncommitted rows created by another unit of work are not visible with this isolation level.

  • TRANSACTION_READ_COMMITTED

    Known as Cursor Stability in DB2. Rows are only locked at the point in time that a cursor is positioned on them; once the next row is fetched, the lock is released. Rows that are modified are locked until the transaction ends. Phantom rows and non-repeatable reads are possible, but uncommitted rows are not visible.

  • TRANSACTION_READ_UNCOMMITTED

    Known as Uncommitted Read in DB2. Transactions are not isolated from each other, and read-only cursors can access uncommitted changes made by other transactions. However, updatable cursors behave identically to TRANSACTION_READ_COMMITTED.

It should be noted that locks obtained by a SELECT statement are shared locks; they do not prevent other transactions from reading data (within the constraints of the isolation level in use by the other transactions).
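
For completeness, the isolation level can be set directly on a JDBC connection as shown in the sketch below; in a WebSphere environment it is more commonly configured elsewhere, such as on the data source or through access intents.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class IsolationLevelExample {

    // Opens a connection using Read Committed (Cursor Stability in DB2).
    public static Connection openReadCommitted(String url, String user, String password)
            throws SQLException {
        Connection con = DriverManager.getConnection(url, user, password);
        con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        con.setAutoCommit(false);
        return con;
    }
}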


Recommendations

Isolation levels other than Read Committed should be used only in exceptional cases. However, for entity beans this rule has to be relaxed; due to the nature of the model, most pessimistic locking access intents require at least Repeatable Read. Additionally, it may be acceptable to use Read Uncommitted for heavily used queries on read-only tables.

Pessimistic session locking should be used with great discretion, as it serializes access to data across a business process that includes user think time.

If pessimistic session locking is used, it should generally only be used to block other users from updating; it should not block reads of the locked data.

All locks, whether logical or physical, should be maintained in the database as the only reliably available central resource. Review of locking patterns should be a mandated part of all design reviews.

System-wide locking tables are often considered to be hotspots. However, this may not be a major issue, as the lock table will typically be small and may well be kept entirely in memory by the database manager. This is best used for short-lived locks, such as namespace locks within a transaction. It can also be used for longer-lived session locks, but cleanup needs to be addressed to prevent table growth.

Session locks can also be kept on the table that is being locked (or the root table of a set of tables). If multiple lock types need to be maintained on a single table, then these can be moved to a separate child table of that parent, or to the global locking table.

Pessimistic transaction locks should conform to best practices to prevent deadlocks, including minimizing business logic while the lock is held, and accessing tables in a common (for example, alphabetic) sequence.

Optimistic transaction locking should not be used to update sets of rows in a single UPDATE statement, as error handling in this case is very difficult.


Conclusion

As we have seen, locking is a complex issue, and it is important that people are able to talk clearly about it. In order to provide a standard language for discussion, this article introduced the concepts of transaction and session locking, and discussed the different patterns that can be used, along with their pros and cons. Although it is possible to make some general recommendations, it is very difficult to set hard and fast rules, since so much depends on the specifics of the application and the type of user population. The single most important point to take away from this article is that locking is not something that you leave to the database management system to take care of for you. It is an integral part of design, and it impacts the overall usability of the system. The choice of optimistic or pessimistic locking strategies cannot be made by default; these design issues need to be surfaced and discussed to make sure that the solution meets both functional and non-functional requirements.

Resources
