On Thu, May 24, 2012 at 12:46 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
> On Thu, May 24, 2012 at 2:24 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
>>> As you can see, raw performance isn't much worse with the larger data
>>> sets, but scalability at high connection counts is severely degraded
>>> once the working set no longer fits in shared_buffers.
>>
>> Hm, wouldn't the BufFreelistLock issue be ameliorated if
>> StrategyGetBuffer could reserve multiple buffers so that you'd draw
>> down your local list and only then go back to the global pool? (easier
>> said than done obviously).
>
> hm, looking at the code some more, it looks like the whole point of
> the strategy system is to do that.

I thought you were suggesting that StrategyGetBuffer would
pre-allocate multiple buffers to a backend under the cover of a single
BufFreelistLock acquisition. If so, that is not what the strategy
system currently does: it is for locally reusing buffers, not for
gang-allocating them.
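
For illustration, the gang-allocation idea could be sketched roughly like
this (a hypothetical standalone sketch, not PostgreSQL's actual code: the
pthread mutex stands in for BufFreelistLock, and `LocalCache`,
`get_buffer`, and `freelist_init` are invented names). Each backend draws
down a batch of buffers under one lock acquisition and consumes them
locally, so contention scales with batches rather than with individual
buffer requests:

```c
#include <pthread.h>

#define NFREE 1024
#define BATCH 16

/* Hypothetical global freelist protected by a single lock, standing in
 * for the shared freelist guarded by BufFreelistLock. */
static int freelist[NFREE];
static int freelist_top;
static pthread_mutex_t freelist_lock = PTHREAD_MUTEX_INITIALIZER;

static void freelist_init(void)
{
    for (int i = 0; i < NFREE; i++)
        freelist[i] = i;            /* buffer IDs 0..NFREE-1 */
    freelist_top = NFREE;
}

/* Per-backend cache, drawn down between lock acquisitions. */
typedef struct LocalCache {
    int bufs[BATCH];
    int count;
} LocalCache;

/* Grab up to BATCH buffers under one lock acquisition, then hand them
 * out locally until empty: one lock round-trip per BATCH buffers
 * instead of one per buffer. */
static int get_buffer(LocalCache *lc)
{
    if (lc->count == 0) {
        pthread_mutex_lock(&freelist_lock);
        while (lc->count < BATCH && freelist_top > 0)
            lc->bufs[lc->count++] = freelist[--freelist_top];
        pthread_mutex_unlock(&freelist_lock);
        if (lc->count == 0)
            return -1;              /* freelist exhausted: caller evicts */
    }
    return lc->bufs[--lc->count];
}
```

The hard part such a sketch glosses over is what happens when the
drawn-down buffers turn out to be pinned or dirty by the time the
backend gets to them.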

If a backend could somehow predict that the buffer it is about to read
in is likely to be cold, perhaps it would make sense for each backend
to maintain a small ring of its own which it can reuse for such cold
buffers.
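
A minimal sketch of such a per-backend cold-buffer ring (again
hypothetical: `ColdBufferRing`, `ring_get`, and `ring_put` are
illustrative names, not PostgreSQL's API) could round-robin through a
handful of previously used buffer IDs, falling back to the shared
freelist only when a slot is empty:

```c
#include <stddef.h>

#define RING_SIZE 8
#define INVALID_BUFFER (-1)

/* Small per-backend ring of buffer IDs to recycle for cold reads,
 * so repeated cold reads don't each hit the global freelist. */
typedef struct ColdBufferRing {
    int buffers[RING_SIZE];   /* buffer IDs previously filled */
    int next;                 /* next slot to reuse, round-robin */
} ColdBufferRing;

static void ring_init(ColdBufferRing *ring)
{
    for (int i = 0; i < RING_SIZE; i++)
        ring->buffers[i] = INVALID_BUFFER;
    ring->next = 0;
}

/* Return a buffer to reuse, or INVALID_BUFFER if the current slot is
 * empty, in which case the caller falls back to the shared freelist. */
static int ring_get(ColdBufferRing *ring)
{
    return ring->buffers[ring->next];
}

/* Record the buffer just filled so a later cold read evicts it,
 * advancing round-robin through the ring. */
static void ring_put(ColdBufferRing *ring, int buf)
{
    ring->buffers[ring->next] = buf;
    ring->next = (ring->next + 1) % RING_SIZE;
}
```

This is essentially the shape of the existing ring-based buffer access
strategies, just owned unconditionally by the backend rather than tied
to a particular scan.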
> ISTM bulk insert type queries
> would be good candidates for a buffer strategy somehow?

Probably. There is a code or README comment to that effect that I
stumbled upon just a couple of hours ago, but I can't immediately
re-find it.

Cheers,

Jeff