
Comments (1)

localhost commented:

I wanted to clarify a couple of things so that this VP Private Cache feature doesn't get a bad rap. The defect Alexey discovered is serious not so much because it causes the VP cache feature to be turned on unexpectedly as because it causes a gross misconfiguration of the feature.

 
A year or so ago we found that in certain CPU-intensive environments our memory management mechanisms became a performance bottleneck. Shared memory segments are broken up into 4 KB blocks, which are allocated to and drained from memory pools as those pools expand and contract over time. The blocks in each segment are tracked by a bitmap that is protected by a latch. Every time a thread needs to add memory blocks to a pool or free blocks from the pool, it must acquire at least one latch and modify the associated block bitmap. We noticed those latches getting very hot during high-end benchmarks on large SMP boxes, and developed this VP-private cache feature as a result.
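
To make the contended path concrete, here is a minimal sketch in C of a latched bitmap allocator. The names, and the pthread mutex standing in for our latch, are illustrative only, not the actual IDS internals:

    /* One segment: 4 KB blocks tracked by a bitmap, one latch
     * serializing every allocation and free.  Illustrative only. */
    #include <pthread.h>
    #include <stdint.h>

    #define BLOCKS_PER_SEG 1024             /* 1024 x 4 KB = 4 MB segment */

    typedef struct segment {
        pthread_mutex_t latch;              /* init with pthread_mutex_init() */
        uint8_t bitmap[BLOCKS_PER_SEG / 8]; /* one bit per 4 KB block */
        char *base;                         /* start of the segment's memory */
    } segment_t;

    /* Allocate one 4 KB block: every caller funnels through the latch. */
    static char *seg_alloc_block(segment_t *seg)
    {
        char *blk = NULL;
        pthread_mutex_lock(&seg->latch);
        for (int i = 0; i < BLOCKS_PER_SEG; i++) {
            if (!(seg->bitmap[i / 8] & (1u << (i % 8)))) {
                seg->bitmap[i / 8] |= (1u << (i % 8));  /* mark in use */
                blk = seg->base + (size_t)i * 4096;
                break;
            }
        }
        pthread_mutex_unlock(&seg->latch);
        return blk;                         /* NULL if the segment is full */
    }

    /* Free one block: again latch first, then clear the bit. */
    static void seg_free_block(segment_t *seg, char *blk)
    {
        size_t i = (size_t)(blk - seg->base) / 4096;
        pthread_mutex_lock(&seg->latch);
        seg->bitmap[i / 8] &= (uint8_t)~(1u << (i % 8));
        pthread_mutex_unlock(&seg->latch);
    }

With dozens of CPU VPs expanding and contracting pools at once, that one latch is exactly where the heat shows up.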
 
When the feature is enabled, freeing a memory block places it into a cache rather than returning it to the segment. Each VP has a cache of its own, so by definition there can be no contention for blocks in this cache. When a thread on, say, CPU VP 5 needs a block, it checks that VP's cache first. If the cache doesn't contain enough contiguous memory, the thread then goes through the normal block-allocation mechanism involving the segment latches and bitmaps.
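
Here is that fast path as a sketch, again with illustrative names; the slow-path prototypes stand in for the latched bitmap allocator sketched above. Note that the real cache also has to satisfy requests for contiguous runs of blocks, which this single-block sketch glosses over:

    #define CACHE_SLOTS 200                /* e.g. 800 KB / 4 KB blocks */

    typedef struct vp_cache {              /* one of these per CPU VP */
        char *blocks[CACHE_SLOTS];         /* freed 4 KB blocks, this VP only */
        int   count;
    } vp_cache_t;

    /* Slow path: the latched bitmap allocator sketched earlier. */
    char *seg_alloc_block_slow(void);
    void  seg_free_block_slow(char *blk);

    /* Free: prefer the private cache; spill to the segment when full. */
    void vp_free_block(vp_cache_t *c, char *blk)
    {
        if (c->count < CACHE_SLOTS)
            c->blocks[c->count++] = blk;   /* no latch: cache is VP-private */
        else
            seg_free_block_slow(blk);      /* cache full: latched path */
    }

    /* Allocate: drain the private cache first, fall back to the segment. */
    char *vp_alloc_block(vp_cache_t *c)
    {
        if (c->count > 0)
            return c->blocks[--c->count];  /* latch-free fast path */
        return seg_alloc_block_slow();     /* cache empty: latched path */
    }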
 
The maximum amount of memory that can be allocated to a VP-private cache is set with the VP_MEMORY_CACHE_KB parameter. The default setting of 0 turns the feature off. The minimum non-zero value is 800, meaning 800 kilobytes for each CPU VP. The maximum allowed value is 40% of SHMTOTAL divided by the number of CPU VPs.
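
A worked example may help; the figures are hypothetical, and note that SHMTOTAL, like this parameter, is expressed in kilobytes. With SHMTOTAL at 4194304 (4 GB) and 8 CPU VPs, the settable range works out to:

    800 KB  <=  VP_MEMORY_CACHE_KB  <=  0.40 * 4194304 / 8  ~=  209715 KB per CPU VP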
 
As far as we can tell, even in Alexey's case the feature was performing exactly as designed. VP_MEMORY_CACHE_KB was set to a huge value and the cache was gobbling up memory blocks accordingly. But he never set VP_MEMORY_CACHE_KB, and unfortunately neither did we. The internal config structure element was left uninitialized and contained garbage.
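
For anyone who hasn't been bitten by this class of bug, a tiny illustration; the names are hypothetical and this is not the actual IDS source:

    #include <stdio.h>
    #include <stdlib.h>

    struct config {
        long vp_memory_cache_kb;            /* never assigned below */
    };

    int main(void)
    {
        struct config *cfg = malloc(sizeof *cfg);  /* contents: garbage */
        /* The missing line -- the actual fix -- would be:
         *     cfg->vp_memory_cache_kb = 0;                             */
        printf("cache: %ld KB per VP\n", cfg->vp_memory_cache_kb);
        free(cfg);
        return 0;
    }

Whatever stale bytes happen to sit at that offset become the cache size, which is how an enormous value can appear out of nowhere.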
 
The cure is to explicitly set VP_MEMORY_CACHE_KB to some value. If you want to take advantage of the cache, by all means set it to an appropriate value in your config file; your IDS virtual memory requirements will increase by that amount times the number of CPU VPs. If you want to disable the feature, simply set VP_MEMORY_CACHE_KB to 0.
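
A minimal onconfig fragment showing both options; the 800 KB and 8-VP figures are illustrative:

    # Enable the cache: 800 KB per CPU VP.  With 8 CPU VPs this adds
    # 800 * 8 = 6400 KB to the instance's virtual memory requirement.
    VP_MEMORY_CACHE_KB 800

    # Or disable the feature explicitly until the fix ships:
    # VP_MEMORY_CACHE_KB 0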
 
When Alexey's defect is fixed, explicitly turning off the feature will no longer be necessary on UNIX. (The bug does not affect IDS on Windows.)
