Taming LOBs in a transaction processing environment: A case study

When tuning a database there are many different aspects to consider. Database administrator tasks run the gamut from simple indexing to physical disk design. When large objects (LOBs) are introduced, tuning can become rather complex. This article discusses the unusual nature of LOBs, methods to improve LOB performance with logging techniques, file system caching, and using INLINE LOBs.


Michael R. Krafick (Michael.Krafick@gmail.com), Senior Database Administrator, Independent consultant

Michael Krafick is a senior database administrator in Atlanta, GA, with an IBM Certification in DB2 LUW V9.7 Database Administration. He has 14 years of experience in both highly transactional databases and large multi-partitioned data warehouses. Mr. Krafick is often a contributing author for db2commerce.com and has spoken at multiple IDUG technical conferences (earning the title of a top ten user speaker). He was also featured as a special guest on the DB2Night Show to speak about how he improved LOB performance by implementing INLINE LOBs.

28 March 2014

Tuning for large objects

There are multiple things to consider when tuning a database. A database administrator does a wide range of tasks, from simple indexing to designing physical disks. When large objects (LOBs) are introduced to the scene, tuning can become complex. Best practices, the IBM Information Center, and other resources all agree that you must take special care when working with LOBs.

The term large object (LOB) refers to DB2® objects that you can use to store large amounts of data. A LOB is a varying-length character string that can contain up to 2 GB minus 1 byte of data. There are three LOB data types: CLOB, DBCLOB, and BLOB.

Why is tuning for a LOB so unusual? The answer lies in how DB2 accesses LOB data. Typically, data in rows and columns is relatively small and can be cached in memory (buffer pools) to speed up access. Large text documents, graphics, video, or audio can take many megabytes or gigabytes to store, so DB2 does not store them in its buffer pools. Accessing them requires DB2 to read directly from disk, which is much slower. (The LONG VARCHAR data type also behaves this way, though it has been deprecated as of DB2 V9.7.) This was the cause of application slowdown and resource strain in my own environment.

In this article, learn about the system effects of LOBs within our transaction processing environment and the steps we took to improve system performance. We'll explore the unusual nature of LOBs and methods to improve LOB performance with logging options, file system caching, and INLINE LOBs.

Hardware and software configuration

First, a summary of our hardware and software configuration. The server environment configuration consisted of:

  • P6 570 Server, 8 logical CPUs (4 Physical), 32 GB memory, 1.2 TB over 4 HDISKS
  • AIX 6.0, DB2 9.7
  • Database size 50 GB

The database environment configuration included:

  • Isolated active log directory
  • Data directories over 4 HDISKS
  • 7300 connections; 4500 average connections in use at one time
  • 14 application servers
  • 350,000 transactions per hour at peak time
  • 2.8 million transactions per day

The database was a highly transactional backbone for client-facing software installed in over 1600 locations. Sized for 10,500 individual users, the database's performance was suffering when 6,000 users were engaged at one time.

Identifying pain: baseline metrics

Listing 1 shows an excerpt of snapshot metrics that illustrates the effect LOBs had on disk utilization within our environment. The environment was an online transaction processing (OLTP) database where the BLOB is placed in a small but critical table. Direct reads and direct writes show DB2 accessing LOBs directly from disk.

Listing 1. Database metrics (baseline)
  Direct reads                              = 233754
  Direct writes                             = 174318 (145 writes/second)
  Direct write elapsed time (ms)            = 24593
  Update/Insert/Delete statements executed  = 93176
  Log pages written                         = 88708
  Number write log IOs                      = 78827

The volume of direct writes means little in isolation. However, one could extrapolate that 145 direct writes per second is on the higher end of the scale.
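As a quick sanity check on Listing 1, the reported rate of 145 writes/second implies a monitoring window of roughly 20 minutes. This is back-of-the-envelope arithmetic, not part of the original snapshot output:

```python
# Figures taken from Listing 1 (baseline snapshot).
direct_writes = 174318
writes_per_second = 145

# Length of the monitoring window implied by the reported rate.
window_seconds = direct_writes / writes_per_second
print(round(window_seconds))        # ~1202 seconds
print(round(window_seconds / 60))   # ~20 minutes
```

Knowing the window length matters when comparing snapshots: counters such as direct writes are cumulative, so two snapshots are only comparable over similar intervals.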

When this database was under mild to medium pressure, disk utilization for tablespaces hovered at 30-40% while the disks holding the active database logs were 50-60% busy. As user adoption picked up, disk utilization increased to between 60% and 80% and the active database log file systems were over 90% busy. Although system resources were not maxed out, the database response time was not fast enough for the application.

To complicate matters, a higher user adoption rate and seasonal changes in activity caused the database to become so I/O bound that the application would come to a halt.

Three ways to improve LOB performance

We attacked the overall performance problem in three ways by:

  • Addressing how LOBs are logged.
  • Addressing file system level caching.
  • Using a new feature in DB2 v9.7 called INLINE LENGTH.

Not logged

The first question to ask is "Do I need to be recoverable?" Most database administrators would scream YES! However, in this specific case the critical LOB table was transient: it was truncated nightly. This unique characteristic allowed us some freedom to influence how DB2 handled logging with the NOT LOGGED clause. Listing 2 shows an example.

Listing 2. Create table statement using NOT LOGGED
      CREATE TABLE EMPLOYEE (
          EMPNO       CHAR(6) NOT NULL,
          MIDDLINIT   CHAR(1),
          DEPT        CHAR(3),
          BADGE_PHOTO BLOB(1M) NOT LOGGED
      )
      [Where EMPLOYEE, EMPNO, and BADGE_PHOTO are illustrative names; NOT LOGGED applies to the LOB column]

When including the NOT LOGGED option, DB2 will not write the LOB data to the transaction logs. This change on a single heavily-used table containing BLOBs gave an immediate 10% improvement in active log utilization. Using the NOT LOGGED option does affect the recoverability of the table. If you use ROLLFORWARD during recovery, the BLOB will be replaced by binary zeroes.

If your transaction is lost mid-flight, it is still recoverable. DB2 uses a technique called shadowing whereby the current storage pages are not over-written. Old, unmodified pages are kept as shadow copies until they are no longer needed for rollback.

IBM's rule of thumb is to use this feature on LOBs over 10 MB when possible. LOBs over 1 GB cannot be logged at all.

File system caching

File system caching is where bits of information are read from physical disk into system memory to improve performance. This is normally a good thing. However, DB2 has its own cache, known as a buffer pool, so it wastes memory and can actually decrease performance to use both kinds of cache. DB2 provides the ability to disable file system caching.

Figure 1 shows how data flows back and forth, causing duplicated effort.

Figure 1. Duplicate effort: buffer pools and file system caching
A diagram showing duplication of effort between Bufferpools and Filesystem Caching

A streamlined effort, with proper use of buffer pools, is shown in Figure 2.

Figure 2. Streamlined effort
A diagram showing the streamlined data flow using buffer pools without file system caching

Alter your tablespace to NO FILESYSTEM CACHING if your tablespaces exclusively contain tables that do not include LOBs. If your tablespace does contain tables that use LOBs, turn on file system caching with the ALTER TABLESPACE statement, as in Listing 3.

Listing 3. Alter tablespace using FILESYSTEM CACHING
      ALTER TABLESPACE LOB_TBSP FILESYSTEM CACHING
      [Where LOB_TBSP is an illustrative name for the tablespace containing your LOB tables]

File system caching is enabled or disabled at a tablespace level, so be cognizant of where tables are allocated within tablespaces. This may involve a tuning effort to separate LOB and NON-LOB tables into isolated tablespaces.

Why does this help performance?

Data that can be stored in rows is pulled into memory via buffer pools. LOBs, by design, cannot be pulled into buffer pools. For argument’s sake, let’s say a database is the back end for an application where all tables are lumped into one tablespace (USERSPACE1) using file system caching.

The LOB tables in USERSPACE1 will get a needed lift because they cannot be pulled into the buffer pool and will use file system caching. But regular row data is passing through both the file system cache and the DB2 buffer pools, causing two memory steps instead of one. This double effort is not efficient and can cause slower response times. In our case study environment, we had over 400 tables. Only 15 contained LOBs, one of which was a critical table with extremely high access rates. Separating these tables into proper tablespaces that allowed proper cache management gave a huge lift.

Using INLINE Length

There is a reason why a LOB bypasses memory and goes straight to disk for data. For example, a music clip, photograph, or video is often much larger than the maximum size of a row (32 KB). In short, it often just won't fit on data pages.

But what if it will?

Not all LOBs are huge video files. LOBs could be a small employee badge photo, a scan of a single signature, or a variable field of characters. Smaller LOBs are good candidates for a feature released in DB2 v9.7 called INLINE LENGTH. INLINE LENGTH tells DB2 that if the object is small enough to fit in a row, then put it in the row. If it is too big, then access it from disk as normal.

If the bulk of your LOBs can be placed inline, you gain the benefits of having LOB data within the physical structure of the table. The LOB data can now be put in buffer pools, avoiding slower disk I/O every time a LOB is referenced. This can produce a dramatic drop in physical I/O and a huge increase in speed.

Implementing INLINE LENGTH requires a few steps, as follows.

  1. ADMIN_EST_INLINE_LENGTH (finding estimated length)

    DB2 provides the ADMIN_EST_INLINE_LENGTH function to analyze and estimate the size needed to bring the LOB within a row. A second useful function, ADMIN_IS_INLINED, determines if a particular LOB has already been moved inline. It can also be used at the end of the process to confirm your changes have taken effect. Listing 4 shows the return values of both functions.

    Listing 4. Return values of ADMIN_IS_INLINED and ADMIN_EST_INLINE_LENGTH
     ADMIN_IS_INLINED (column)
            1: Inlined
            0: Not inlined
            NULL: Inputs are NULL
     ADMIN_EST_INLINE_LENGTH (column)
            (number): Est. row size to be inline
            NULL: Inputs are NULL
            -1: Cannot be inlined due to length
            -2: Inserted before 9.7, can't be evaluated
    Listing 5 shows how to estimate INLINE length.

    Listing 5. Example SQL to estimate INLINE length
     SELECT POLICY,
            ADMIN_IS_INLINED(SIGNATURE_PDF) AS IS_INLINED,
            ADMIN_EST_INLINE_LENGTH(SIGNATURE_PDF) AS EST_INLINE_LENGTH
     FROM POLICY_INFO
     [Where POLICY is a unique column, SIGNATURE_PDF is the associated LOB, and POLICY_INFO is an illustrative table name]

     POLICY      IS_INLINED EST_INLINE_LENGTH
     ----------- ---------- -----------------
     1A678Z4              1              1406
     1A984C9              0              1406
     1A581D2              0              1208
     1C789F8              0                -1
     1W190D3              0              1101
     1Q303J9              0                -2

    You can calculate the optimal in-row size based on the largest returned value for your LOB, or that value plus a cushion (such as 10%). Keep in mind that an estimate of -2 doesn't mean the LOB can't be brought in row; it just can't be assessed with this function.

    Not all rows may fit inline. There is still a benefit if most of them do. Be cautious, as this might affect file system caching settings. When contemplating which value to use, remember the value chosen will apply to all LOBs in the table. The value needs to be as high as possible, without wasting space.


    After you have a value that works for your specific LOB, add 4 bytes for inline overhead. To simplify the example, let's assume our table has only the six policies shown in Listing 5. The estimate of 1406 is the largest valid value. Add 4 bytes for overhead (1410) and you have a good estimated value for your inline length.

    After the size is established, calculate the total row size and confirm that the estimate is within the maximum row size for the selected tablespace page size. This could mean adjusting tablespaces and relocating tables as necessary. Figure 3 shows the row size maximum for each page size.

    Figure 3. Page size to row size maximum
    A chart showing rowsize limits.

  2. ALTER TABLE (setting INLINE LENGTH)

    You can create whole new tables using the inline length feature or you can issue a simple ALTER statement, as in Listing 6.

    Listing 6. Example SQL to set INLINE length
      ALTER TABLE (tablename) ALTER COLUMN (colname) SET INLINE LENGTH (N) 
      [Where N is max length from ADMIN_EST_INLINE_LENGTH evaluation]
  3. REORG

    If you are altering an existing table, LOBs are not stored inline until the specific column is updated or an offline REORG using the LONGLOBDATA option is run. Take care to use the LONGLOBDATA flag only during the conversion process and not in all future REORGs. According to IBM documentation, there is no real benefit to running LONGLOBDATA in future REORGs unless XML data types are used; in that case, some space may be reclaimed. Otherwise, the LONGLOBDATA flag will actually slow down the REORG command.
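The sizing arithmetic from step 1 can be sketched in code. The estimates come from Listing 5; the per-page-size maximum row lengths are the commonly published DB2 LUW limits, which you should verify against the documentation for your release:

```python
# ADMIN_EST_INLINE_LENGTH results from Listing 5. Negative values are
# sentinels: -1 = too large to inline, -2 = cannot be evaluated.
estimates = [1406, 1406, 1208, -1, 1101, -2]

INLINE_OVERHEAD = 4  # bytes of overhead for an inlined LOB

# Largest estimate that can actually be inlined, plus overhead.
largest = max(e for e in estimates if e > 0)
inline_length = largest + INLINE_OVERHEAD   # 1406 + 4 = 1410

# Maximum row length (bytes) by tablespace page size. These are the
# commonly published DB2 LUW limits -- confirm for your release.
MAX_ROW_SIZE = {4096: 4005, 8192: 8101, 16384: 16293, 32768: 32677}

def fits_in_page(total_row_bytes, page_size):
    """True if the projected row size fits the given page size."""
    return total_row_bytes <= MAX_ROW_SIZE[page_size]

# Example: 1410 bytes of inlined LOB plus 600 bytes for the remaining
# columns (the 600 is illustrative) fits comfortably in a 4 KB page.
print(inline_length)                             # 1410
print(fits_in_page(inline_length + 600, 4096))   # True
```

A variant with a 10% cushion for future growth would use `int(largest * 1.10) + INLINE_OVERHEAD` instead; the trade-off, as noted above, is wasted space in every row.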

Considerations after setting INLINE length

After you've set INLINE length, you might need to address the following considerations.

Not all LOBs will fit inline
Some LOBs will be too large and will not fit in the predefined space. Even if 30% of LOBs still go directly to disk after conversion, the 70% that now fit inline will reduce disk load and transaction time.
How does this affect file system caching?
Does it make sense to turn off file system caching if only a percentage of LOBs are inline? In our scenario, certain targeted LOB tables were small enough that, once their LOBs were inline, we made an effort to pull the whole table into the buffer pool to increase performance. This meant turning off file system caching after the LOBs were converted. I have not found a specific rule of thumb, but in this case study we decided to turn off file system caching when 70% of our LOBs were inline.

Another alternative is to use the CREATE TABLE … LONG IN option. It gives flexibility to use the current tablespace for data values that will be inline but place LONG data (such as LOBs) in a separate tablespace. Note that the LONG IN tablespace must be DMS. Regardless of whether the original tablespace is used or a secondary LOB tablespace is created, activate file system caching appropriately.

Watch your CPU
The old adage "robbing Peter to pay Paul" applies to inline length. Reducing I/O to disk can increase CPU utilization because the CPUs are no longer waiting for I/O to occur. The effort is not eliminated, but shifted. Watching metrics such as read efficiency, and reducing the need to scan tables or indexes in memory, will ease pressure on the CPU.

Performance improvement results

After implementing NOT LOGGED on a single critical table, separating LOB tablespaces, properly tuning FILESYSTEM CACHING, and inlining a high percentage of LOBs, we attained the results in Listing 7.

Listing 7. Database metrics (post tuning)
                                   Before               After
Direct reads                       = 233754             73471   68% Improvement
Direct writes                      = 174318             14008   91% Improvement
Direct write requests              = 16213                262   98% Improvement
Direct write elapsed time (ms)     = 24593               1622   93% Improvement
Log pages written                  = 88708              53642       Overall work drops
Log write time (sec.ns)            = 89                    76       Overall work drops
Number write log IOs               = 78827              48354       Overall work drops
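The improvement percentages in Listing 7 can be reproduced from the before/after counters (truncating the percentage, as the article's figures do):

```python
# Before/after counters taken from Listing 7.
metrics = {
    "Direct reads":                   (233754, 73471),
    "Direct writes":                  (174318, 14008),
    "Direct write requests":          (16213,  262),
    "Direct write elapsed time (ms)": (24593,  1622),
}

for name, (before, after) in metrics.items():
    # Percentage improvement, truncated to match the listing.
    pct = int((before - after) / before * 100)
    print(f"{name}: {pct}% improvement")
# Direct reads 68%, direct writes 91%, requests 98%, elapsed time 93%
```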

Efficiency improvements

We gained the following efficiencies.

  • Total transactions processed increased by 9%
  • 20% of total transaction time improved by 15-16%
  • Less than 1% (very few) of transactions slowed down slightly

Performance improvements

Figure 4 provides details about the performance improvements.

Figure 4. Performance improvements
A chart showing transaction performance improvements after tuning

Lessons learned

In the scenario with our transaction processing environment, it was not one but many specific tuning efforts that gave overall lift. Each of the implemented phases yielded enough improvement on its own to provide borrowed time until all three phases could be implemented.



