I am a little confused about the use of buffer descriptors and the relationship between maxBufferDescs, pagepool, and maxFilesToCache.
As far as I understand it, one buffer descriptor is needed for each data block cached in the pagepool (all local to that node, of course).
That would suggest the maximum useful number of buffer descriptors is (pagepool_size)/(block_size), or rather (pagepool_size)/(max_block_size).
One also finds the recommendation to have about 10 buffer descriptors per cached file, which would be correct if cached files had 10 data blocks on average (or at least if about 10 blocks per file needed to be cached). Clearly, depending on the size of the typically accessed files and the block size, that "default" of 10 should be adjusted.
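Just to make my reading of that heuristic concrete, here is a small sketch (all numbers are made-up examples, not recommendations): with a 4 MiB block size, a file only needs about 10 cached blocks if it is around 40 MiB.

```python
def descs_per_file(file_size, block_size):
    """Blocks (and hence buffer descriptors) needed to cache a whole file,
    i.e. ceil(file_size / block_size)."""
    return -(-file_size // block_size)  # ceiling division

MiB = 1024 * 1024
print(descs_per_file(40 * MiB, 4 * MiB))  # 10 -> the "10 per file" heuristic fits
print(descs_per_file(1 * MiB, 4 * MiB))   # 1  -> heuristic would overshoot by 10x
```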
However, the documentation for the parameter maxBufferDescs states that its maximum is (pagepool_size)/(16 KiB). This is clearly a larger number than those derived above, in most cases much larger.
So, where does the bound maxBufferDescs <= (pagepool_size)/(16 KiB) come from? Under which circumstances does it make sense?
Is it the case that Scale also caches individual subblocks, not only full blocks, in the pagepool? If so, wouldn't the bound rather be maxBufferDescs <= (pagepool_size)/(smallest possible subblock_size)?
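To show how far apart these candidate bounds are, here is the arithmetic for one example configuration (the pagepool, block, and subblock sizes are assumptions I picked for illustration, not values from any documentation):

```python
GiB = 1024 ** 3
MiB = 1024 ** 2
KiB = 1024

pagepool = 16 * GiB       # assumed pagepool size
block_size = 4 * MiB      # assumed (max) block size
subblock_size = 8 * KiB   # assumed smallest subblock size

full_block_bound = pagepool // block_size     # one descriptor per full block
subblock_bound = pagepool // subblock_size    # one descriptor per subblock
doc_bound = pagepool // (16 * KiB)            # the bound stated in the docs

print(full_block_bound)  # 4096
print(subblock_bound)    # 2097152
print(doc_bound)         # 1048576
```

So the documented maximum is off from the full-block bound by a factor of 256 here, and it also does not match the subblock bound, which is what prompts my question.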
I suppose a buffer descriptor itself does not consume much memory, but how much is it, and from which memory pool is it allocated?
I suppose others have come across this as well. The parameter itself is surely not very critical: if set too small, it restricts the amount of data that can be cached, and if set too large, it probably does no harm (as long as it is not far too large, in case the memory for the descriptors is allocated at daemon start; otherwise it wouldn't matter anyway).