Re: Adding locks statistics - Mailing list pgsql-hackers

From Bertrand Drouvot
Subject Re: Adding locks statistics
Msg-id actze31XhGYXNNQg@ip-10-97-1-34.eu-west-3.compute.internal
In response to Re: Adding locks statistics  (Tomas Vondra <tomas@vondra.me>)
List pgsql-hackers
Hi,

On Mon, Mar 30, 2026 at 06:11:17PM +0200, Tomas Vondra wrote:
> Hi,
> 
> Isn't pgstat_lock_flush_cb a bit broken with nowait=true? It'll skip
> flushing stats for that particular lock type, but then it'll happily
> reset the pending stats anyway, forgetting the stats.
> 
> AFAIK it should keep the pending stats, and flush them sometime later,
> when the lock is not contended. That's what the other flush callbacks
> do, at least. This probably means it needs to reset the entries one by
> one, not the whole struct at once.

Oh right, it's currently misbehaving, thanks for the warning!

> TBH I'm rather skeptical about having one lock per entry. Sure, it
> allows two backends to write different entries concurrently. But is it
> actually worth it? With nowait=true it might even be cheaper to try with
> a single lock, and AFAICS that's the case where it matters.
> 
> I wouldn't be surprised if this behaved quite poorly with contended
> cases, because the backends will be accessing the locks in exactly the
> same order and synchronize. So if one lock is contended, won't that
> "synchronize" the backends, making the other locks contended too?
> 
> Has anyone tested it actually improves the behavior? I only quickly
> skimmed the thread, I might have missed it ...
> 

I just did a few tests, with the per-entry lock version fixed (to avoid the bug
mentioned above) and with a single-lock version.

The test is this one:

Setup:

deadlock_timeout set to 1ms.

CREATE TABLE t1(id int primary key, v int);
CREATE TABLE t2(id int primary key, v int);
INSERT INTO t1 SELECT i, 0 FROM generate_series(1,100) i;
INSERT INTO t2 SELECT i, 0 FROM generate_series(1,100) i;

test.sql:

\set id1 random(1, 100)
\set id2 random(1, 100)
BEGIN;
SELECT pg_advisory_xact_lock(:id1);
UPDATE t1 SET v=v+1 WHERE id=:id1;
UPDATE t2 SET v=v+1 WHERE id=:id2;
END;

Launched that way:

pgbench -c 32 -j 32 -T60 -f test.sql

One run produces something like:

postgres=# select locktype, waits, wait_time from pg_stat_lock where waits > 0;
   locktype    | waits  | wait_time
---------------+--------+-----------
 tuple         |   5058 |      5092
 transactionid |  78287 |     79269
 advisory      | 105005 |    177253
(3 rows)

With one lock per entry, the average tps over 5 runs is 12099.
With one single lock, the average tps over 5 runs is 12118.

The difference (about 0.16%) looks like noise, so one lock per entry does not
show improved performance.

Also, both kinds of tests produce this perf profile:

0.00%     0.00%  postgres      postgres              [.] pgstat_lock_flush_cb

So, I'm tempted to say that one lock per entry adds complexity without an
observable performance benefit. A single lock also matches the intent of the
nowait path more naturally, and I would not be surprised if one lock per entry
behaved worse in some cases.

So, PFA a patch to move to a single lock instead.

Regards,

-- 
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
