Re: WAL Insertion Lock Improvements - Mailing list pgsql-hackers
From | Bharath Rupireddy |
Subject | Re: WAL Insertion Lock Improvements |
Date | |
Msg-id | CALj2ACWgeOPEKVY-TEPvno=Vatyzrb-vEEP8hN7QqrQ=yPRupA@mail.gmail.com |
In response to | Re: WAL Insertion Lock Improvements (Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>) |
Responses | Re: WAL Insertion Lock Improvements |
List | pgsql-hackers |
On Mon, May 8, 2023 at 5:57 PM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
>
> On Mon, Apr 10, 2023 at 9:38 AM Michael Paquier <michael@paquier.xyz> wrote:
> > -LWLockUpdateVar(LWLock *lock, uint64 *valptr, uint64 val)
> > +LWLockUpdateVar(LWLock *lock, pg_atomic_uint64 *valptr, uint64 val)
> > [...]
> >         Assert(pg_atomic_read_u32(&lock->state) & LW_VAL_EXCLUSIVE);
> >
> > -       /* Update the lock's value */
> > -       *valptr = val;
> >
> > The sensitive change is in LWLockUpdateVar(). I am not completely
> > sure to understand this removal, though. Does that influence the case
> > where there are waiters?
>
> I'll send about this in a follow-up email to not overload this
> response with too much data.

It helps the case where there are no waiters. IOW, the value is now
updated without the waitlist lock when there are no waiters, so there
is no extra waitlist lock acquisition/release just to update the value.
In turn, this helps any other backend that wants to flush the WAL and
is looking for the newly updated insertingAt value in
WaitXLogInsertionsToFinish(); the flushing backend can now get the new
value faster.

> > Another thing I was wondering about: how much does the fast-path used
> > in LWLockUpdateVar() influence the performance numbers? Am I right to
> > guess that it counts for most of the gain seen?
>
> I'll send about this in a follow-up email to not overload this
> response with too much data.

The fast-path exit in LWLockUpdateVar() doesn't seem to influence the
results much; see the results below. However, it avoids the waitlist
lock acquisition when there are no waiters.

test-case 1: -T5, WAL ~16 bytes

clients    HEAD     PATCHED (with fast path)    PATCHED (no fast path)
1          1482     1486                        1457
2          1617     1620                        1569
4          3174     3233                        3031
8          6136     6365                        5725
16         12566    12269                       11685
32         24284    23621                       23177
64         50135    45528                       46653
128        94903    89791                       89103
256        82289    152915                      152835
512        62498    138838                      142084
768        57083    125074                      126768
1024       51308    113593                      115930
2048       41084    88764                       85110
4096       19939    42257                       43917

> > Or could it be that
> > the removal of the spin lock in
> > LWLockConflictsWithVar()/LWLockWaitForVar() the point that has the
> > highest effect?
>
> I'll send about this in a follow-up email to not overload this
> response with too much data.

Of the 3 functions that got rid of the waitlist lock -
LWLockConflictsWithVar()/LWLockWaitForVar(), LWLockUpdateVar() and
LWLockReleaseClearVar() - the perf reports show that the biggest gain
(for the use-cases that I've tried) is in
LWLockConflictsWithVar()/LWLockWaitForVar():

test-case 1: -T5, WAL ~16 bytes

HEAD:
+   61.89%     0.05%  postgres  [.] LWLockWaitForVar
+   43.19%     0.12%  postgres  [.] LWLockConflictsWithVar
+    1.62%     0.00%  postgres  [.] LWLockReleaseClearVar

PATCHED:
+   38.79%     0.11%  postgres  [.] LWLockWaitForVar
     0.40%     0.02%  postgres  [.] LWLockConflictsWithVar
+    2.80%     0.00%  postgres  [.] LWLockReleaseClearVar

test-case 6: -T5, WAL 4096 bytes

HEAD:
+   29.66%     0.07%  postgres  [.] LWLockWaitForVar
+   20.94%     0.08%  postgres  [.] LWLockConflictsWithVar
     0.19%     0.03%  postgres  [.] LWLockUpdateVar

PATCHED:
+    3.95%     0.08%  postgres  [.] LWLockWaitForVar
     0.19%     0.03%  postgres  [.] LWLockConflictsWithVar
+    1.73%     0.00%  postgres  [.] LWLockReleaseClearVar

--
Bharath Rupireddy
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
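
For illustration, a rough sketch of the no-waiters fast path discussed
above. This is a simplified paraphrase, not the actual patch; it only
assumes the existing LW_FLAG_HAS_WAITERS bit and pg_atomic_* primitives
from lwlock.c/atomics.h, and it glosses over the slow path and the exact
barrier choice used in the real code.

/*
 * Sketch only: publish the new value first, then touch the wait list
 * only if somebody is actually waiting on the variable.
 */
void
LWLockUpdateVar(LWLock *lock, pg_atomic_uint64 *valptr, uint64 val)
{
    Assert(pg_atomic_read_u32(&lock->state) & LW_VAL_EXCLUSIVE);

    /*
     * Update the value without the wait-list lock. A full-barrier atomic
     * is used so the store is visible before we look at the waiter flag.
     */
    pg_atomic_exchange_u64(valptr, val);

    /*
     * Fast-path exit: nobody is waiting on the variable, so no wait-list
     * lock acquisition/release is needed at all.
     */
    if ((pg_atomic_read_u32(&lock->state) & LW_FLAG_HAS_WAITERS) == 0)
        return;

    /*
     * Slow path (elided here): take the wait-list lock and wake up the
     * LW_WAIT_UNTIL_FREE waiters, as before.
     */
}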