On 10/13/2014 06:26 PM, Andres Freund wrote:
> On 2014-10-13 17:56:10 +0300, Heikki Linnakangas wrote:
>> So the gist of the problem is that LWLockRelease doesn't wake up
>> LW_WAIT_UNTIL_FREE waiters, when releaseOK == false. It should, because a
>> LW_WAIT_UNTIL_FREE waiter is now free to run if the variable has changed in
>> value, and it won't steal the lock from the other backend that's waiting to
>> get the lock in exclusive mode, anyway.
>
> I'm not a big fan of that change. Right now we don't iterate the waiters
> if releaseOK isn't set. Which is good for the normal lwlock code because
> it avoids pointer indirections (of stuff likely residing on another
> cpu).

I can't get excited about that. It's pretty rare for releaseOK to be
false, and when it's true, you iterate the waiters anyway.
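
For context, the wakeup logic in LWLockRelease() currently looks roughly
like this (a compressed sketch from memory, not a verbatim quote of
lwlock.c):

    head = lock->head;
    if (head != NULL)
    {
        if (lock->exclusive == 0 && lock->shared == 0 && lock->releaseOK)
        {
            /* walk the wait queue and pick the waiters to wake up ... */
        }
        else
        {
            /* lock still held, or releaseOK clear: wake nobody */
            head = NULL;
        }
    }

So when releaseOK is false we indeed never dereference any PGPROC, but we
also leave any LW_WAIT_UNTIL_FREE waiters at the front of the queue
sleeping, which is exactly the problem.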

> Wouldn't it be more sensible to reset releaseOK in *UpdateVar()? I
> might just miss something here.

That's not enough. There's no LWLockUpdateVar() involved in the example I
gave. And LWLockUpdateVar() already wakes up all LW_WAIT_UNTIL_FREE
waiters, regardless of releaseOK.
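
For reference, the wakeup path in LWLockUpdateVar() is roughly the
following (a simplified sketch from memory, not the exact code); note
that releaseOK is never consulted here:

    void
    LWLockUpdateVar(LWLock *lock, uint64 *valptr, uint64 val)
    {
        PGPROC     *head;

        SpinLockAcquire(&lock->mutex);

        *valptr = val;          /* publish the new value */

        /*
         * LW_WAIT_UNTIL_FREE waiters are always at the front of the
         * queue; unlink all of them, unconditionally.
         */
        head = lock->head;
        if (head != NULL && head->lwWaitMode == LW_WAIT_UNTIL_FREE)
        {
            PGPROC     *proc = head;
            PGPROC     *next = proc->lwWaitLink;

            while (next != NULL && next->lwWaitMode == LW_WAIT_UNTIL_FREE)
            {
                proc = next;
                next = next->lwWaitLink;
            }
            lock->head = next;
            proc->lwWaitLink = NULL;
        }
        else
            head = NULL;

        SpinLockRelease(&lock->mutex);

        /* Wake up the waiters we unlinked above. */
        while (head != NULL)
        {
            PGPROC     *proc = head;

            head = proc->lwWaitLink;
            proc->lwWaitLink = NULL;
            proc->lwWaiting = false;
            PGSemaphoreUnlock(&proc->sem);
        }
    }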

Hmm, we could set releaseOK in LWLockWaitForVar(), though, when it
(re-)queues the backend. That would be less invasive, for sure; patch
attached. I like this better.
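
In code, the idea is roughly the following (a sketch of the approach; see
the attachment for the actual diff): in LWLockWaitForVar(), at the point
where we add ourselves to the wait queue while still holding the
spinlock, also force releaseOK back on:

    /* ... in LWLockWaitForVar(), holding lock->mutex; proc is MyProc ... */

    /* add myself to the wait queue, as a LW_WAIT_UNTIL_FREE waiter */
    proc->lwWaiting = true;
    proc->lwWaitMode = LW_WAIT_UNTIL_FREE;
    proc->lwWaitLink = lock->head;      /* these waiters go to the front */
    if (lock->head == NULL)
        lock->tail = proc;
    lock->head = proc;

    /*
     * Make sure the next LWLockRelease() will scan the wait queue and wake
     * us, even if an earlier release had cleared releaseOK.
     */
    lock->releaseOK = true;

    SpinLockRelease(&lock->mutex);

That way the releaseOK shortcut in LWLockRelease() stays as it is, and
only the variable-wait path pays for the extra store.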

BTW, attached is a little test program I wrote to reproduce this more
easily. It exercises the LWLock* calls directly. To run it, build and
install the module, then do "CREATE EXTENSION lwlocktest". Then launch
three backends concurrently that run "select lwlocktest(1)", "select
lwlocktest(2)" and "select lwlocktest(3)". They will deadlock within
seconds.

- Heikki