- Q. I have some questions about the VP private cache; maybe you could help me.
1. Where is the private cache allocated from? The resident or the virtual segment?
2. How is the VP private cache used? From my understanding:
When a VP needs to read some data pages from disk, it searches the VP cache first to check whether there is enough space; if it cannot find the space, it mallocs the memory from the buffers.
So it seems that Informix now has two levels of cache: the first is the VP private cache and the second is the buffers. But when one VP needs to access a data page that lies in another VP's private cache, what does it do?
3. After a data page has been read into the VP private cache, it may be updated or deleted. When is the data page flushed to disk to mark it clean and make room for a new data page? At checkpoint time?
A. The VP-private memory cache is not used for buffers in the buffer pool, or for any pages read from disk. I think that's where the confusion is starting. When a thread needs to allocate memory from its own session pool, for example, that's when this VP-private cache comes in. Think of all the memory that threads allocate from pools like the 'global' pool or the 'rsam' pool or their own session pool (e.g. pool name '125'). It's that memory that goes into the VP-private cache when it's freed.
Here's the big picture. Before we had this VP-private cache feature, every VP would fight every other VP for the same memory in a particular shared memory segment. The memory in that segment had to be protected by a latch. So when Thread 1 on VP 1 needed a block of memory from Segment 1, it first acquired the latch, then took the memory, then released the latch. Meanwhile if Thread 2 on VP 2 needed memory from the same segment it would have to wait for the latch to be released in order to get a block of memory from Segment 1. Typically these threads need these blocks of memory for their session-private pools. Again, this is not related to the buffer pool that contains pages from disk.
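The contention described above can be sketched roughly like this. This is a hypothetical simplification, not Informix source: the "latch" is modeled as a pthread mutex, and the names (seg_latch, seg_free_list, blk) are illustrative only.

```c
/* Illustrative sketch of the pre-cache behavior: every allocation from
 * and free to a shared memory segment must hold the segment latch.
 * Names here are invented for illustration, not Informix internals. */
#include <pthread.h>
#include <stddef.h>

typedef struct blk { struct blk *next; } blk;

static pthread_mutex_t seg_latch = PTHREAD_MUTEX_INITIALIZER; /* the segment latch */
static blk *seg_free_list = NULL;                             /* free blocks in "Segment 1" */

/* Threads on every VP contend on seg_latch, even for unrelated allocations. */
void *seg_alloc(void)
{
    pthread_mutex_lock(&seg_latch);   /* Thread 2 waits here while Thread 1 holds it */
    blk *b = seg_free_list;
    if (b)
        seg_free_list = b->next;
    pthread_mutex_unlock(&seg_latch);
    return b;
}

void seg_free(void *p)
{
    pthread_mutex_lock(&seg_latch);
    ((blk *)p)->next = seg_free_list;
    seg_free_list = (blk *)p;
    pthread_mutex_unlock(&seg_latch);
}
```

Every call, on every VP, funnels through that single lock, which is exactly the serialization point the private cache was designed to remove.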
You can imagine that in a high-stress environment with a lot of VPs and a lot of threads the latch on Segment 1 would become a performance bottleneck.
The solution we chose was to allow each VP to build its own private cache of memory blocks as blocks were freed. In other words, the first time a memory block was allocated from a segment, it would be allocated the same way it always has been. But if that memory block was freed, where ordinarily it would go back to the segment, now it remains allocated but is tracked by the freeing VP as part of its private cache. The next time a thread on that same VP needs a memory block it does not need to acquire any latch to get it. It simply takes the block from that VP's private cache. We know that no other thread will try to allocate memory from that same cache simultaneously, because only one thread can run on a VP at a time.
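The free/allocate fast path described above can be sketched as follows. Again, this is an assumed simplification for illustration (the struct names, the per-block size, and the stubbed slow path are all mine, not Informix code); the key point it shows is that the cache path takes no latch, which is safe only because a single thread runs on a VP at a time.

```c
/* Illustrative sketch of the VP-private cache fast path. Each VP keeps
 * its own free list of blocks; no latch is needed to push or pop them
 * because only one thread runs on a VP at a time. All names are
 * invented for illustration, not Informix internals. */
#include <stddef.h>

typedef struct blk { struct blk *next; } blk;

typedef struct vp {
    blk   *cache_head;   /* this VP's private cache of freed blocks */
    size_t cache_kb;     /* current cache size, capped by VP_MEMORY_CACHE_KB */
} vp;

enum { BLK_KB = 4, VP_MEMORY_CACHE_KB = 1000 };

/* Stub for the original latched path (latch + shared segment). */
void *latched_seg_alloc(void) { return NULL; }

/* Allocation: try the private cache first; fall back to the segment. */
void *vp_alloc(vp *v)
{
    if (v->cache_head) {             /* fast path: no latch on this VP-local list */
        blk *b = v->cache_head;
        v->cache_head = b->next;
        v->cache_kb -= BLK_KB;
        return b;
    }
    return latched_seg_alloc();      /* first-time path, same as it always was */
}

/* Free: instead of returning the block to the segment, the freeing VP
 * keeps it in its private cache, up to the configured limit. */
void vp_free(vp *v, void *p)
{
    if (v->cache_kb + BLK_KB <= VP_MEMORY_CACHE_KB) {
        ((blk *)p)->next = v->cache_head;
        v->cache_head = (blk *)p;
        v->cache_kb += BLK_KB;
    }
    /* else: the block would go back to the segment under the latch (omitted) */
}
```

Note the design trade-off this makes explicit: blocks parked in a VP's cache stay allocated from the instance's point of view, trading some memory for latch-free allocation.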
The size of an individual VP private cache is limited by the VP_MEMORY_CACHE_KB configuration parameter. In other words, if you set VP_MEMORY_CACHE_KB to 1000, no VP-private memory cache in the server can exceed 1000 KB (roughly 1 MB) in size. Calculating the maximum amount of memory that an instance can allocate toward all VP-private caches is a matter of multiplying the value of VP_MEMORY_CACHE_KB by the number of VPs.
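As a worked example of that formula (the VP count of 8 below is an assumption for illustration, not a recommendation):

```c
/* Upper bound on memory held across all VP-private caches:
 * VP_MEMORY_CACHE_KB * number of VPs. E.g. 1000 KB/VP * 8 VPs = 8000 KB. */
long max_cache_kb(long per_vp_cache_kb, long num_vps)
{
    return per_vp_cache_kb * num_vps;
}
```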
If you set VP_MEMORY_CACHE_KB to 0, the feature is turned off.
The minimum non-zero value for VP_MEMORY_CACHE_KB is 800, I believe.