Thread: Latches vs lwlock contention

Latches vs lwlock contention

From
Thomas Munro
Date:
Hi,

We usually want to release lwlocks, and definitely spinlocks, before
calling SetLatch(), to avoid putting a system call into the locked
region and to minimise the time the lock is held.  There are a few
places where we don't do that, possibly because it's not a single
latch whose pointer we could hang onto, but rather a set of latches
that has to be collected from some data structure, and we don't have
infrastructure to help with that.  There are also cases where we
semi-reliably create lock contention, because the backends that wake
up immediately try to acquire the very same lock.
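
To sketch the kind of infrastructure I mean: the LatchGroup and
SetLatches() names below match the attached patches, but AddLatch(),
the initialisation and the surrounding function are just assumptions
for the example, not the exact code.

```
#include "storage/latch.h"
#include "storage/lwlock.h"
#include "storage/proc.h"

/*
 * Sketch only: remember the latches to set while the lock is held,
 * release the lock, and only then make the SetLatch() system calls.
 */
static void
wake_waiters_unlocked(LWLock *lock, PGPROC **waiters, int nwaiters)
{
	LatchGroup	wakeups = {0};	/* assumed to need no other setup */

	LWLockAcquire(lock, LW_EXCLUSIVE);
	for (int i = 0; i < nwaiters; i++)
		AddLatch(&wakeups, &waiters[i]->procLatch);	/* no syscall yet */
	LWLockRelease(lock);

	/* Any kernel wakeups, and any contention they cause, happen here. */
	SetLatches(&wakeups);
}
```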

One example is heavyweight lock wakeups.  If you run BEGIN; LOCK TABLE
t; ... and then N other sessions wait in SELECT * FROM t;, and then
you run ... COMMIT;, you'll see the first session wake all the others
while it still holds the partition lock itself.  They'll all wake up
and begin to re-acquire the same partition lock in exclusive mode,
immediately go back to sleep on *that* wait list, and then wake each
other up one at a time in a chain.  We could avoid the first
double-bounce by not setting the latches until after we've released
the partition lock.  We could avoid the rest of them by not
re-acquiring the partition lock at all, which ... if I'm reading right
... shouldn't actually be necessary in modern PostgreSQL?  Or if there
is another reason to re-acquire then maybe the comment should be
updated.

Presumably no one really does that repeatedly while there is a long
queue of non-conflicting waiters, so I'm not claiming it's a major
improvement, but it's at least a micro-optimisation.
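
To make the double-bounce concrete, the waiter side of ProcSleep()
currently does roughly this (heavily simplified; deadlock checks,
timers and error handling are omitted, and PG_WAIT_LOCK stands in for
the real wait_event_info):

```
	/* Sleep until the lock holder wakes us by setting our latch. */
	for (;;)
	{
		(void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, 0,
						 PG_WAIT_LOCK);
		ResetLatch(MyLatch);
		if (MyProc->waitStatus != PROC_WAIT_STATUS_WAITING)
			break;
	}

	/*
	 * All N waiters then queue up for the very partition lock that the
	 * committing session still holds, which is the pile-up described
	 * above.
	 */
	LWLockAcquire(partitionLock, LW_EXCLUSIVE);
	/* ... re-check state, clean up ... */
	LWLockRelease(partitionLock);
```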

There are some other, simpler mechanical changes covering synchronous
replication, SERIALIZABLE DEFERRABLE and condition variables (the
last inspired by Yura Sokolov's patches[1]).  Actually I'm not at all
sure about the CV implementation; I feel like a more ambitious change
is needed to make our CVs perform well.

See attached sketch patches.  I guess the main thing that may not be
good enough is the use of a fixed sized latch buffer.  Memory
allocation in don't-throw-here environments like the guts of lock code
might be an issue, which is why it just gives up and flushes when
full; maybe it should try to allocate and fall back to flushing only
if that fails.  These sketch patches aren't proposals, just
observations in need of more study.
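
For concreteness, the buffering works roughly like this (the struct
layout, the AddLatch() name and the buffer size are illustrative
assumptions; the give-up-and-flush-when-full behaviour is the point):

```
#include "storage/latch.h"

#define LATCH_GROUP_SIZE 64		/* illustrative, not the patch's value */

typedef struct LatchGroup
{
	int			size;
	Latch	   *latches[LATCH_GROUP_SIZE];
} LatchGroup;

/*
 * Remember a latch to be set later by SetLatches().  If the fixed
 * buffer is already full, give up and flush immediately, even though
 * we may still be inside a locked region.  SetLatches() is assumed to
 * reset group->size to zero after setting the latches.
 */
static void
AddLatch(LatchGroup *group, Latch *latch)
{
	if (group->size == LATCH_GROUP_SIZE)
		SetLatches(group);
	group->latches[group->size++] = latch;
}
```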

[1] https://postgr.es/m/1edbb61981fe1d99c3f20e3d56d6c88999f4227c.camel%40postgrespro.ru

Attachment

Re: Latches vs lwlock contention

From
Thomas Munro
Date:
On Fri, Oct 28, 2022 at 4:56 PM Thomas Munro <thomas.munro@gmail.com> wrote:
> See attached sketch patches.  I guess the main thing that may not be
> good enough is the use of a fixed sized latch buffer.  Memory
> allocation in don't-throw-here environments like the guts of lock code
> might be an issue, which is why it just gives up and flushes when
> full; maybe it should try to allocate and fall back to flushing only
> if that fails.

Here's an attempt at that.  There aren't actually any uses of this
stuff in critical sections here, so perhaps I shouldn't bother
with that part.  The part I'd most like some feedback on is the
heavyweight lock bits.  I'll add this to the commitfest.

Attachment

Re: Latches vs lwlock contention

From
vignesh C
Date:
On Tue, 1 Nov 2022 at 16:40, Thomas Munro <thomas.munro@gmail.com> wrote:
>
> On Fri, Oct 28, 2022 at 4:56 PM Thomas Munro <thomas.munro@gmail.com> wrote:
> > See attached sketch patches.  I guess the main thing that may not be
> > good enough is the use of a fixed sized latch buffer.  Memory
> > allocation in don't-throw-here environments like the guts of lock code
> > might be an issue, which is why it just gives up and flushes when
> > full; maybe it should try to allocate and fall back to flushing only
> > if that fails.
>
> Here's an attempt at that.  There aren't actually any uses of this
> stuff in critical sections here, so perhaps I shouldn't bother
> with that part.  The part I'd most like some feedback on is the
> heavyweight lock bits.  I'll add this to the commitfest.

The patch does not apply on top of HEAD, as shown in [1]; please post a rebased patch:

=== Applying patches on top of PostgreSQL commit ID
456fa635a909ee36f73ca84d340521bd730f265f ===
=== applying patch ./v2-0003-Use-SetLatches-for-condition-variables.patch
patching file src/backend/storage/lmgr/condition_variable.c
patching file src/backend/storage/lmgr/lwlock.c
Hunk #1 FAILED at 183.
1 out of 1 hunk FAILED -- saving rejects to file
src/backend/storage/lmgr/lwlock.c.rej
patching file src/include/storage/condition_variable.h
patching file src/include/storage/lwlock.h
Hunk #1 FAILED at 193.
1 out of 1 hunk FAILED -- saving rejects to file
src/include/storage/lwlock.h.rej

[1] - http://cfbot.cputube.org/patch_41_3998.log

Regards,
Vignesh



Re: Latches vs lwlock contention

From
Thomas Munro
Date:
On Sat, Jan 28, 2023 at 3:39 AM vignesh C <vignesh21@gmail.com> wrote:
> On Tue, 1 Nov 2022 at 16:40, Thomas Munro <thomas.munro@gmail.com> wrote:
> > Here's an attempt at that.  There aren't actually any uses of this
> > stuff in critical sections here, so perhaps I shouldn't bother
> > with that part.  The part I'd most like some feedback on is the
> > heavyweight lock bits.  I'll add this to the commitfest.
>
> The patch does not apply on top of HEAD as in [1], please post a rebased patch:

Rebased.  I dropped the CV patch for now.

Attachment

Re: Latches vs lwlock contention

From
Peter Eisentraut
Date:
On 04.03.23 20:50, Thomas Munro wrote:
> Subject: [PATCH v3 1/6] Allow palloc_extended(NO_OOM) in critical sections.
> 
> Commit 4a170ee9e0e banned palloc() and similar in critical sections, because an
> allocation failure would produce a panic.  Make an exception for allocation
> with NULL on failure, for code that has a backup plan.

I suppose this assumes that out of memory is the only possible error 
condition that we are concerned about for this?

For example, we sometimes see "invalid memory alloc request size",
either because of corrupted data or because code does something we
didn't expect.  Could this then still result in a panic?  Also, the
realloc code paths potentially do more work, with more possible error
conditions, and/or they error out right away because realloc isn't
supported by the context type.

Maybe this is all ok, but it would be good to make the assumptions more 
explicit.
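
To spell out the pattern I understand the patch to rely on, roughly (a
sketch only: I'm assuming a LatchGroup whose latches field is a
growable pointer with a capacity; palloc_extended() with
MCXT_ALLOC_NO_OOM returns NULL on out-of-memory, but other errors,
such as an invalid request size, are still raised):

```
	Latch	  **bigger;
	int			newcap = group->capacity * 2;

	bigger = palloc_extended(newcap * sizeof(Latch *), MCXT_ALLOC_NO_OOM);
	if (bigger == NULL)
	{
		/* Backup plan: flush the buffer now instead of erroring out. */
		SetLatches(group);
	}
	else
	{
		memcpy(bigger, group->latches, group->size * sizeof(Latch *));
		group->latches = bigger;
		group->capacity = newcap;
	}
```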




Re: Latches vs lwlock contention

From
Aleksander Alekseev
Date:
Hi,

> Maybe this is all ok, but it would be good to make the assumptions more
> explicit.

Here are my two cents.

```
static void
SetLatchV(Latch **latches, int nlatches)
{
    /* Flush any other changes out to main memory just once. */
    pg_memory_barrier();

    /* Keep only latches that are not already set, and set them. */
    for (int i = 0; i < nlatches; ++i)
    {
        Latch       *latch = latches[i];

        if (!latch->is_set)
            latch->is_set = true;
        else
            latches[i] = NULL;
    }

    pg_memory_barrier();

[...]

void
SetLatches(LatchGroup *group)
{
    if (group->size > 0)
    {
        SetLatchV(group->latches, group->size);

[...]
```

I suspect this API may be error-prone without some additional
comments. A caller (which may be an extension author too) may rely on
the implementation details of SetLatches() / SetLatchV() and inspect
the resulting group->latches[] values, e.g. to figure out whether it
attempted to change the state of a given latch. Even worse, one could
mistakenly assume that the result tells exactly whether the caller was
the one who changed the latch's state. That being said, I see why this
particular implementation was chosen.
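
To make the hazard concrete, this is the kind of caller code I'd worry
about (entirely hypothetical; AddLatch() and the post-call array
inspection are made up for the example):

```
	LatchGroup	group = {0};

	AddLatch(&group, &someProc->procLatch);
	SetLatches(&group);

	/*
	 * WRONG: whether latches[0] is still non-NULL afterwards is an
	 * implementation detail of SetLatchV(), and the is_set check there
	 * is not an atomic test-and-set, so this does not prove that we
	 * were the backend that woke the other process.
	 */
	if (group.latches[0] != NULL)
		elog(LOG, "we set the latch ourselves");
```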

I added corresponding comments to SetLatchV() and SetLatches(). Also
the patchset needed a rebase. PFA v4.

It passes `make installcheck-world` on three of my machines: macOS
x64, Linux x64 and Linux RISC-V.


--
Best regards,
Aleksander Alekseev

Attachment

Re: Latches vs lwlock contention

From
Heikki Linnakangas
Date:
On 28/10/2022 06:56, Thomas Munro wrote:
> One example is heavyweight lock wakeups.  If you run BEGIN; LOCK TABLE
> t; ... and then N other sessions wait in SELECT * FROM t;, and then
> you run ... COMMIT;, you'll see the first session wake all the others
> while it still holds the partition lock itself.  They'll all wake up
> and begin to re-acquire the same partition lock in exclusive mode,
> immediately go back to sleep on *that* wait list, and then wake each
> other up one at a time in a chain.  We could avoid the first
> double-bounce by not setting the latches until after we've released
> the partition lock.  We could avoid the rest of them by not
> re-acquiring the partition lock at all, which ... if I'm reading right
> ... shouldn't actually be necessary in modern PostgreSQL?  Or if there
> is another reason to re-acquire then maybe the comment should be
> updated.

ISTM that the change to not re-acquire the lock in ProcSleep is
independent of the other changes. Let's split that off into a separate
patch.

I agree it should be safe. Acquiring a lock just to hold off interrupts
is overkill anyway; HOLD_INTERRUPTS() would be enough.
LockErrorCleanup() uses HOLD_INTERRUPTS() already.

There are no CHECK_FOR_INTERRUPTS() in GrantAwaitedLock(), so cancel/die 
interrupts can't happen here. But could we add HOLD_INTERRUPTS(), just 
pro forma, to document the assumption? It's a little awkward: you really 
should hold interrupts until the caller has done "awaitedLock = NULL;". 
So it's not quite enough to add a pair of HOLD_ and RESUME_INTERRUPTS() 
at the end of ProcSleep(). You'd need to do the HOLD_INTERRUPTS() in 
ProcSleep() and require the caller to do RESUME_INTERRUPTS(). In a 
sense, ProcSleep downgrades the lock on the partition to just holding 
off interrupts.
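
A minimal sketch of that contract, just to spell it out (fragments
only; the names follow the mail text rather than the exact code):

```
	/* In ProcSleep(), where the partition lock is re-acquired today: */
	HOLD_INTERRUPTS();		/* instead of LWLockAcquire(partitionLock, ...) */
	GrantAwaitedLock();		/* no CHECK_FOR_INTERRUPTS() in here */
	/* return with interrupts still held */

	/* In the caller, once it has finished its own cleanup: */
	awaitedLock = NULL;
	RESUME_INTERRUPTS();	/* pairs with the HOLD_INTERRUPTS() above */
```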

Overall +1 on this change to not re-acquire the partition lock.

-- 
Heikki Linnakangas
Neon (https://neon.tech)