Re: Avoid LWLockWaitForVar() for currently held WAL insertion lock in WaitXLogInsertionsToFinish() - Mailing list pgsql-hackers

From: Andres Freund <andres@anarazel.de>
Subject: Re: Avoid LWLockWaitForVar() for currently held WAL insertion lock in WaitXLogInsertionsToFinish()
Date:
Msg-id: 20221202004042.qwjadmkpq4zky3tn@awork3.anarazel.de
In response to: Re: Avoid LWLockWaitForVar() for currently held WAL insertion lock in WaitXLogInsertionsToFinish() (Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>)
Responses: WAL Insertion Lock Improvements (was: Re: Avoid LWLockWaitForVar() for currently held WAL insertion lock in WaitXLogInsertionsToFinish())
List: pgsql-hackers
On 2022-11-25 16:54:19 +0530, Bharath Rupireddy wrote:
> On Fri, Nov 25, 2022 at 12:16 AM Andres Freund <andres@anarazel.de> wrote:
> > I think we could improve this code more significantly by avoiding the call
> > to LWLockWaitForVar() for all locks that aren't acquired or don't have a
> > conflicting insertingAt; that'd require just a bit more work to handle
> > systems without tear-free 64bit writes/reads.
> >
> > The easiest way would probably be to just make insertingAt a 64bit atomic,
> > which transparently does the required work to make even non-atomic
> > reads/writes tear-free. Then we could trivially avoid the spinlock in
> > LWLockConflictsWithVar(), LWLockReleaseClearVar(), and with just a bit more
> > work add a fastpath to LWLockUpdateVar(). We don't need to acquire the wait
> > list lock if there aren't any waiters.
> >
> > I'd bet that'd start to have visible effects in a workload with many small
> > records.
>
> Thanks Andres! I quickly came up with the attached patch. I also ran an
> insert test [1]; below are the results. I also attached the results graph.
> Cirrus CI is happy with the patch -
> https://github.com/BRupireddy/postgres/tree/wal_insertion_lock_improvements_v1_2.
> Please let me know if the direction of the patch seems right.
>
> clients    HEAD      PATCHED
> 1            1354       1499
> 2            1451       1464
> 4            3069       3073
> 8            5712       5797
> 16          11331      11157
> 32          22020      22074
> 64          41742      42213
> 128         71300      76638
> 256        103652     118944
> 512        111250     161582
> 768         99544     161987
> 1024        96743     164161
> 2048        72711     156686
> 4096        54158     135713

Nice.

> From 293e789f9c1a63748147acd613c556961f1dc5c4 Mon Sep 17 00:00:00 2001
> From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
> Date: Fri, 25 Nov 2022 10:53:56 +0000
> Subject: [PATCH v1] WAL Insertion Lock Improvements
>
> ---
>  src/backend/access/transam/xlog.c |  8 +++--
>  src/backend/storage/lmgr/lwlock.c | 56 +++++++++++++++++--------------
>  src/include/storage/lwlock.h      |  7 ++--
>  3 files changed, 41 insertions(+), 30 deletions(-)
>
> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
> index a31fbbff78..b3f758abb3 100644
> --- a/src/backend/access/transam/xlog.c
> +++ b/src/backend/access/transam/xlog.c
> @@ -376,7 +376,7 @@ typedef struct XLogwrtResult
>  typedef struct
>  {
>  	LWLock		lock;
> -	XLogRecPtr	insertingAt;
> +	pg_atomic_uint64 insertingAt;
>  	XLogRecPtr	lastImportantAt;
>  } WALInsertLock;
>
> @@ -1482,6 +1482,10 @@ WaitXLogInsertionsToFinish(XLogRecPtr upto)
>  	{
>  		XLogRecPtr	insertingat = InvalidXLogRecPtr;
>
> +		/* Quickly check and continue if no one holds the lock. */
> +		if (!IsLWLockHeld(&WALInsertLocks[i].l.lock))
> +			continue;

I'm not sure this is quite right - don't we need a memory barrier? But I don't
see a reason to not just leave this code as-is; I think this should be
optimized entirely in lwlock.c.

I'd probably split the change to an atomic from the other changes either way.

Greetings,

Andres Freund
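[Editor's note: to make the direction discussed above concrete, here is a
minimal sketch of what the lwlock.c side could look like once the variable is
a pg_atomic_uint64: LWLockConflictsWithVar() loses its spinlock, and
LWLockUpdateVar() gains the no-waiters fastpath. This is hypothetical
illustration, not the patch attached to this thread. It assumes the existing
LW_VAL_EXCLUSIVE / LW_FLAG_HAS_WAITERS flags from lwlock.c and the
pg_atomic_* primitives from src/include/port/atomics.h; on platforms without
native 64-bit atomics, pg_atomic_read_u64() falls back to an emulation that
still yields a tear-free value.]

static bool
LWLockConflictsWithVar(LWLock *lock, pg_atomic_uint64 *valptr,
                       uint64 oldval, uint64 *newval, bool *result)
{
    uint64      value;

    /* If the lock isn't held exclusively, there is nothing to wait for. */
    if (!(pg_atomic_read_u32(&lock->state) & LW_VAL_EXCLUSIVE))
    {
        *result = true;
        return false;
    }

    *result = false;

    /*
     * A tear-free 64-bit atomic read replaces the wait-list spinlock the
     * old code needed around "value = *valptr".
     */
    value = pg_atomic_read_u64(valptr);
    if (value != oldval)
    {
        *newval = value;
        return false;           /* variable has advanced; no need to wait */
    }

    return true;                /* lock held and value unchanged: must wait */
}

void
LWLockUpdateVar(LWLock *lock, pg_atomic_uint64 *valptr, uint64 val)
{
    /*
     * An atomic exchange is a full memory barrier, so the new value is
     * globally visible before any waiter we wake can re-read it -- one way
     * to address the barrier question raised above.
     */
    pg_atomic_exchange_u64(valptr, val);

    /* Fastpath: no waiters, so no need to take the wait-list lock. */
    if ((pg_atomic_read_u32(&lock->state) & LW_FLAG_HAS_WAITERS) == 0)
        return;

    /* ... slow path unchanged: lock the wait list and wake up waiters ... */
}

[The fastpath is where a win for workloads with many small records would
plausibly come from: with short insertions the wait list is usually empty, so
most LWLockUpdateVar() calls would touch only two atomics instead of taking
the spinlock.]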