Maybe It’s Because...
Martin Packer
Maybe it’s because I know nothing about DB2 buffer pools that I’ve never written about them.
Actually, that wouldn’t be true. I would say I know a fair amount about DB2, but I’ve taken a bit of a step back from it - as we have a DB2 Performance specialist in the team. But not too much of a step back, I hope.
So, why write about DB2 buffer pools now? Well, I had an interesting conversation about them the other night.
The actual conversation was about Buffer Pool Hit Ratio calculations - which are a little weird, to say the least.
But, having discovered I hadn’t blogged on the subject, I’ve decided to braindump.
A Simple Buffer Pool Model
In a simple buffer pool setup, an application requests pages from the buffer manager. The buffer manager retrieves them from the buffer pool if they’re there. Otherwise they are retrieved from disk. If a page is retrieved from disk it is loaded into a buffer, potentially displacing another page. In simple buffer pool models, the buffer pool manager deploys a Least Recently Used (LRU) algorithm for discarding pages.
So a hit percentage is easy to calculate: It would be hits/(hits+misses) turned into a percentage. A hit here is where the page was retrieved from the buffer pool and a miss is where it came from disk. It’s meaningful, too.
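In code, the simple model’s calculation might look like this (a sketch; the function and variable names are mine):

```python
def naive_hit_percentage(hits: int, misses: int) -> float:
    """Hit percentage for the simple model: pages found in the pool
    versus pages that had to be read from disk."""
    total = hits + misses
    if total == 0:
        return 0.0
    return 100.0 * hits / total

# 900 pages found in the pool, 100 read from disk:
print(naive_hit_percentage(900, 100))  # 90.0
```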
But DB2’s Buffer Pool Model Is Anything But Simple
DB2 buffer pool management isn’t simple in quite a few ways. What follows is not an exhaustive list; I just want to give you a flavour.
Suppose you knew that the page that was asked for was the first of several neighbouring pages that would need to be retrieved from disk. Then you could more efficiently retrieve them as a block. Not just because you could do it in fewer I/Os but also because you could retrieve the second and subsequent pages while the first was being processed. Or something like that.
This is actually what DB2 will do, under certain circumstances. This technique is called Prefetch. DB2 has three types of Prefetch: Sequential, List and Detected (or Dynamic).
The first two are generally kicked off because DB2 knows before it executes the SQL that prefetching is required. The third is more intriguing. As the name suggests, it happens because DB2 detects dynamically that there is some benefit in prefetching.
In any case, more pages are read than the initial application request asked for.
Suppose DB2 read a page that the application never wanted. That would do something strange to our naive buffer pool hit ratio calculation. And the various flavours of prefetch can lead to that. There is even the phenomenon of a negative hit percentage.
But relax; these are just numbers.
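To see why the number can go negative, consider a calculation commonly used for DB2: of all getpage requests, how many did not require a page to be read in? (The names here are mine, and this is a sketch rather than the exact formula any particular reporting tool uses.)

```python
def hit_percentage(getpages: int, pages_read: int) -> float:
    """Of all getpage requests, the proportion that did NOT require a read.
    pages_read counts synchronous reads plus every page brought in by
    Sequential, List and Dynamic Prefetch - wanted or not."""
    if getpages == 0:
        return 0.0
    return 100.0 * (getpages - pages_read) / getpages

# Prefetch has read far more pages than the application asked for:
print(hit_percentage(100, 250))  # -150.0 - a negative "hit" percentage
```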
By the way, just to complicate things, “neighbouring” might not mean “contiguous”. It might mean “sufficiently close together”. (Contiguous is, of course, an example of that.) List Prefetch, in particular, is gaining efficiency without contiguousness.
Buffer Pool Page Replenishment
I mentioned the Least Recently Used (LRU) algorithm above. While DB2 generally does use this method of page management, it needn’t - and you can control this. There are two other algorithms available to you:

- First In First Out (FIFO) - pages are discarded in the order they were read in, regardless of how recently they were last referenced.
- NONE - no page stealing at all; the objects in the pool are effectively pinned in memory.
The LRU algorithm is computationally a little expensive, as you have to keep track of the “page ages” - that is, how long it has been since each page was last referenced. The other two algorithms are simpler and thus less expensive.
To be fair, most customers stick to LRU buffering, which is the default.
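A toy illustration of the bookkeeping LRU requires - on every reference the page’s “age” must be updated, which is exactly the work the simpler algorithms avoid. (This is a sketch of the general technique, not how DB2 implements it.)

```python
from collections import OrderedDict

class LRUBufferPool:
    """Toy LRU page replacement. On every reference we must move the page
    to the "most recently used" end - that per-reference bookkeeping is
    the cost the simpler algorithms avoid."""
    def __init__(self, size: int):
        self.size = size
        self.pages = OrderedDict()  # page id -> page data

    def getpage(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)  # touch: now most recently used
            return True                      # hit
        if len(self.pages) >= self.size:
            self.pages.popitem(last=False)   # evict the least recently used
        self.pages[page_id] = object()       # "read from disk" into a buffer
        return False                         # miss

pool = LRUBufferPool(2)
pool.getpage(1); pool.getpage(2)
pool.getpage(1)          # touch page 1 again
pool.getpage(3)          # pool full: evicts page 2, not page 1
print(pool.getpage(1))   # True - page 1 survived because it was recently used
```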
DB2 Buffer Pool Instrumentation
As you probably know, DB2 has instrumentation at various levels of detail. Two are of general interest: Statistics Trace and Accounting Trace.
Let me summarise what these can tell us about buffer pools - and how they behave.
Statistics Trace (SMF 100 and 102) is the main instrumentation for understanding subsystem performance themes, for example virtual storage.
Of most relevance to this post is the instrumentation in support of buffer pools. You get detailed instrumentation on each individual buffer pool. By the way a typical subsystem can have anywhere from half a dozen to dozens of buffer pools.
As well as configuration information - sizes, names and thresholds - you get counts of events. For example Synchronous Read I/Os.
If you were processing raw Statistics Trace records you might easily get confused: The counters are totals from when the subsystem was started up to the moment the record was cut. So, to get rates, you have to subtract the previous record’s counts from those in the current record. A typical calculation would be the delta between the count in the first record in an hour and the last in the same hour. There used to be a fairly severe problem with this:
Record cutting frequency is governed by the STATIME parameter. It used to default to 30 minutes. With a STATIME of 30, if you subtracted a counter in the first record of the hour from the same counter in the last record, you’d underrepresent the hour’s activity by about 50%. Fortunately the default value of STATIME dropped to 5 and finally to 1 minute, so the underrepresentation is under 2%. Dropping the STATIME default from 30 to 1 does produce 30 times as many SMF records, but they are cheap to produce and small.
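The arithmetic is easy to sketch, assuming records are cut exactly on the STATIME boundary:

```python
def underrepresentation_pct(statime_minutes: int, window_minutes: int = 60) -> float:
    """Subtracting the first record's counter from the last record's within
    a window misses the activity before the first record - roughly one
    STATIME interval's worth."""
    records = window_minutes // statime_minutes  # records cut in the window
    covered = (records - 1) * statime_minutes    # span from first to last record
    return 100.0 * (window_minutes - covered) / window_minutes

print(underrepresentation_pct(30))  # 50.0 - about half the hour missed
print(underrepresentation_pct(1))   # about 1.7 - under 2%
```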
This might be extraneous detail, but it used to be possible for counters to overflow - in fact wrap. The counters are now 64-bit instead of 32-bit, so that is no longer a concern.
One stunt I used to pull was to detect if the value of a counter dropped. That would indicate a DB2 restart had occurred. Nowadays, of course, I use SMF 30 for the same purpose. And in fact I don’t tend to see DB2 restarts outside of IPLs - much.
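Putting the two tricks together - deltas from cumulative counters, with a counter drop treated as a restart - might look like this. (A sketch; in practice you’d first match the records by subsystem and buffer pool.)

```python
def interval_deltas(counts):
    """Turn cumulative Statistics Trace counter values (in time order)
    into per-interval deltas. A drop in the value signals a subsystem
    restart, so counting starts again from that record."""
    deltas = []
    for prev, curr in zip(counts, counts[1:]):
        if curr < prev:          # counter went backwards: DB2 was restarted
            deltas.append(curr)  # activity accumulated since the restart
        else:
            deltas.append(curr - prev)
    return deltas

# Cumulative counts with a restart between the 3rd and 4th records:
print(interval_deltas([100, 250, 400, 50, 120]))  # [150, 150, 50, 70]
```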
Statistics Trace doesn’t contain any timings; for that, read on.
Accounting Trace (SMF 101) is much more straightforward. And more detailed.
Let’s take a simple example. A CICS transaction, accessing DB2, will generally cut a single “IFCID 3” Accounting Trace record. As well as many other useful things, such as timings, the record contains Buffer Pool Accounting sections.
Each Buffer Pool Accounting section documents the activity to a single buffer pool from this transaction instance. So you can tell, for example, how many synchronous read I/Os were performed - at the buffer pool level - for this transaction.
If you wanted to analyse the buffer pool behaviour as it relates to a transaction you would:

- Collect the Accounting Trace records for the transaction (or set of transactions) of interest.
- Step through each record’s Buffer Pool Accounting sections.
- Sum the counters you care about - synchronous read I/Os, for example - by buffer pool.
There are a couple of things to note:
In short, Accounting Trace is very useful. Obviously, with its level of granularity, it is high volume. But it’s not terribly CPU intensive and is essential for proper DB2 performance analysis.
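A sketch of that aggregation - note the record layout and field names here (bpid, sync_read_ios) are mine, not DB2’s actual field names:

```python
def totals_by_pool(accounting_records):
    """Sum synchronous read I/Os per buffer pool across a set of
    transaction-level Accounting Trace records."""
    totals = {}
    for record in accounting_records:
        for section in record["buffer_pools"]:
            bpid = section["bpid"]
            totals[bpid] = totals.get(bpid, 0) + section["sync_read_ios"]
    return totals

# Two transaction instances, each with its Buffer Pool Accounting sections:
records = [
    {"buffer_pools": [{"bpid": "BP1", "sync_read_ios": 4},
                      {"bpid": "BP2", "sync_read_ios": 1}]},
    {"buffer_pools": [{"bpid": "BP1", "sync_read_ios": 6}]},
]
print(totals_by_pool(records))  # {'BP1': 10, 'BP2': 1}
```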
Establishing A Working Set
If you look at a DB2 subsystem in the first few days and even weeks of its life you’ll generally see something that looks like a memory leak: more and more memory is used. I’ve demonstrated this many times to customers. Apart from bugs, this isn’t the case. It’s what I would call “establishing a working set”. Here are two examples of how this happens:

- Buffer pools are allocated virtually, but real memory is only acquired as pages are actually read into them. A large pool can take days to fill.
- Caches - the Dynamic Statement Cache, for example - grow as the workload presents new statements, levelling off once the common ones have been seen.
The whole area of DB2 Virtual Storage is fascinating, and again well documented by Statistics Trace. However, nowadays (since Version 10) what is of greater interest is how use of virtual storage drives use of real storage. Hence this section of the post.
This has necessarily been a brief introduction to DB2 buffer pool management. I wanted to introduce you to a few concepts:

- Prefetch, and why it makes hit ratio calculations weird.
- Page replacement, and why LRU isn’t the only choice.
- The instrumentation - Statistics Trace and Accounting Trace - and what each can tell you.
- How a subsystem establishes its working set.
I’ve deliberately steered clear of discussing updates and DB2’s strategies for writing updated pages out. I’ve also not covered logging or DB2 Data Sharing - as I wanted to keep it simple.
Wow! “Braindump” is the right word: This got verbose. Well done if you got all the way through it. But I’ve covered a lot of ground at a high level. DB2 Performance is a very interesting topic - and really quite extensive. Which is one of the reasons I got into the subject in the first place. And why I like to keep my hand in still.