On 2015-06-30 21:08:53 +0300, Heikki Linnakangas wrote:
> >         /*
> >          * XXX: We can significantly optimize this on platforms with 64bit
> >          * atomics.
> >          */
> >         value = *valptr;
> >         if (value != oldval)
> >         {
> >             result = false;
> >             mustwait = false;
> >             *newval = value;
> >         }
> >         else
> >             mustwait = true;
> >         SpinLockRelease(&lock->mutex);
> >     }
> >     else
> >         mustwait = false;
> >
> >     if (!mustwait)
> >         break;          /* the lock was free or value didn't match */
> >
> >     /*
> >      * Add myself to wait queue. Note that this is racy, somebody else
> >      * could wakeup before we're finished queuing. NB: We're using nearly
> >      * the same twice-in-a-row lock acquisition protocol as
> >      * LWLockAcquire(). Check its comments for details.
> >      */
> >     LWLockQueueSelf(lock, LW_WAIT_UNTIL_FREE, false);
>
> After the spinlock is released above, but before the LWLockQueueSelf() call,
> it's possible that another backend comes in, acquires the lock, changes the
> variable's value, and releases the lock again. In 9.4, the spinlock was not
> released until the process was queued.

Hm. Right. A recheck of the value after the queuing should be sufficient
to fix that? That's how we deal with the exact same scenario for the
normal lock state, so that shouldn't be very hard to add.
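
Roughly along these lines, I think (untested sketch against the code
quoted above, reusing the existing LWLockDequeueSelf(); a full recheck
would presumably also re-test whether the lock is still held at all,
like the check before queueing does):

    LWLockQueueSelf(lock, LW_WAIT_UNTIL_FREE, false);

    /*
     * Re-check the variable now that we're on the queue.  If it changed
     * while the spinlock was released above, the wakeup may already have
     * happened, so undo the queueing instead of sleeping forever.
     */
    SpinLockAcquire(&lock->mutex);
    value = *valptr;
    mustwait = (value == oldval);
    SpinLockRelease(&lock->mutex);

    if (!mustwait)
    {
        /* the value changed under us, don't wait after all */
        LWLockDequeueSelf(lock);
        *newval = value;
        result = false;
        break;
    }

    /* otherwise go on to sleep on the semaphore, as before */
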
> Looking at LWLockAcquireWithVar(), it's also no longer holding the spinlock
> while it updates the Var. That's not cool either :-(.

Hm. I'd somehow assumed that holding the lwlock itself ought to be
sufficient, but it really isn't: LWLockWaitForVar() reads the variable
while holding only the spinlock, so the update has to happen under the
spinlock as well. This var stuff isn't fitting all that well into the
lwlock code.
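
I.e. the update path needs to do something like this (sketch only,
assuming the valptr/val naming used by the rest of the variable
support):

    /*
     * Update the protected variable under the same spinlock that
     * LWLockWaitForVar() takes to read it, so waiters never see a torn
     * or stale value.
     */
    SpinLockAcquire(&lock->mutex);
    *valptr = val;
    SpinLockRelease(&lock->mutex);

That at least keeps all accesses to the variable under the mutex until
we can replace this with 64bit atomics, as the XXX comment above
already suggests.
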
Greetings,
Andres Freund