On Mon, Jun 4, 2012 at 10:42 AM, Ants Aasma <ants@cybertec.at> wrote:
> On Mon, Jun 4, 2012 at 6:38 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
>> On Mon, Jun 4, 2012 at 10:17 AM, Merlin Moncure <mmoncure@gmail.com> wrote:
>>> What happens (in the very unlikely, but possible case?) if another
>>> backend races to the buffer you've pointed at with 'victim'? It looks
>>> like multiple backends share the clock sweep now, but don't you need
>>> to need an extra test to ensure it's still a candidate victim buffer?
>>
>> Actually, I don't think you do: the existing check on refcount is
>> probably good enough. Hm, why did you get rid of
>> BufferStrategyControl.lastFreeBuffer?
>
> It was dead code as far as I could tell. That change isn't actually
> relevant for this patch because free-list management is still
> protected by a lock (except the initial unlocked test that is
> doublechecked under lock) and so doesn't need any adjustment.
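Right -- and just to make sure I'm reading it the same way, here's a toy,
standalone sketch of that "unlocked test, double-checked under lock" dance
as I picture it (C11 atomics plus a pthread mutex standing in for the
lwlock; names and structure are mine, not your patch's):

#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

#define NBUFFERS 16

typedef struct BufDesc
{
    int         buf_id;
    int         free_next;      /* next free buffer, or -1 */
} BufDesc;

static BufDesc buffers[NBUFFERS];
static _Atomic int free_head = -1;          /* head of free list, -1 = empty */
static pthread_mutex_t freelist_lock = PTHREAD_MUTEX_INITIALIZER;

static BufDesc *
try_pop_freelist(void)
{
    BufDesc    *buf = NULL;

    /* cheap unlocked peek: in the common "list is empty" case, skip the lock */
    if (atomic_load_explicit(&free_head, memory_order_relaxed) < 0)
        return NULL;

    pthread_mutex_lock(&freelist_lock);
    {
        int         head = atomic_load(&free_head);    /* re-check under lock */

        if (head >= 0)
        {
            buf = &buffers[head];
            atomic_store(&free_head, buf->free_next);
        }
    }
    pthread_mutex_unlock(&freelist_lock);
    return buf;
}

If that's roughly the shape of it, then I agree the free-list side doesn't
need any further adjustment for this patch.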
I have to admit -- this is pretty cool. The changes look pretty clean, but
I think it's going to need some more review and some benchmarking to prove
it's actually faster under all workloads. If there's heavy contention on
getting victim buffers you're going to be issuing a lot of atomic
operations -- each one of which will lock down the cache line. A yielding
LWLock is obviously going to have very different behavior under contention,
so there needs to be a performance test that really burns up the freelist.
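To make that concern concrete, here's a toy, standalone sketch (again C11
atomics, illustrative only -- not your patch) of what one lock-free sweep
attempt costs: a fetch_add on the shared clock hand plus loads/CAS on the
buffer header, each of which bounces a cache line between CPUs instead of
putting backends to sleep the way an LWLock would:

#include <stdatomic.h>

#define NBUFFERS 1024

typedef struct BufState
{
    _Atomic int refcount;       /* pin count held by backends */
    _Atomic int usage_count;    /* clock-sweep usage counter */
} BufState;

static BufState buf_state[NBUFFERS];
static _Atomic unsigned clock_hand = 0;

/*
 * One sweep step: advance the shared hand and try to claim that buffer.
 * Returns the victim's index, or -1 to tell the caller to loop and retry.
 */
static int
try_get_victim(void)
{
    unsigned    pos = atomic_fetch_add(&clock_hand, 1) % NBUFFERS;  /* atomic #1 */
    BufState   *b = &buf_state[pos];
    int         usage;
    int         expected = 0;

    if (atomic_load(&b->refcount) != 0)     /* pinned, not a candidate */
        return -1;

    usage = atomic_load(&b->usage_count);
    if (usage > 0)
    {
        /* decay via CAS so concurrent decrements can't push it negative */
        atomic_compare_exchange_strong(&b->usage_count, &usage, usage - 1);
        return -1;
    }

    /* claim it by taking the first pin; a racing backend loses the CAS */
    if (atomic_compare_exchange_strong(&b->refcount, &expected, 1))  /* atomic #2 */
        return (int) pos;
    return -1;
}

Note the final CAS is also what answers my earlier race question: two
backends can land on the same buffer, but only one of them wins the pin.
The flip side is that under heavy eviction everyone is hammering clock_hand
and the same few headers, which is exactly what the benchmark should stress.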
If it's Well And Truly faster, you have to start wondering whether the
freelist lock can't be scrapped completely: buffer invalidation only
happens in fairly rare events -- but when it does, ISTM it's fairly
counterproductive. In trying to avoid the sweep you end up pushing everyone
towards a lock-protected free list. I don't know about you, but I'd prefer
more consistent behavior (as in, not "drop big table, concurrency drops
50%").
Granted, this doesn't really address the OP's problem :-). I bet it'd
supplement the nailing strategy nicely, however.
merlin