Re: pg_lock_status() performance - Mailing list pgsql-performance

From: Tom Lane
Subject: Re: pg_lock_status() performance
Date:
Msg-id: 19169.1240957024@sss.pgh.pa.us
In response to: Re: pg_lock_status() performance (Merlin Moncure <mmoncure@gmail.com>)
List: pgsql-performance
Merlin Moncure <mmoncure@gmail.com> writes:
>> On Tue, Apr 28, 2009 at 5:41 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> [squint...]  AFAICS the only *direct* cost component in pg_lock_status
>>> is the number of locks actually held or awaited.  If there's a
>>> noticeable component that depends on max_locks_per_transaction, it must
>>> be from hash_seq_search() iterating over empty hash buckets.  Which is
>>> a mighty tight loop.  What did you have max_connections set to?

> oops. misread that...the default 100.

Hmm ... so we are talking about 1638400 vs 6400 hash buckets ... if that
adds 4 msec to your query time then it's taking about 2.5 nsec per empty
bucket, which I guess is not out of line for three lines of C code.
So that does seem to be the issue.
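
For the record, here is roughly where those figures come from, as a quick
back-of-the-envelope sketch (not PostgreSQL code).  It assumes the shared
lock hash table ends up with about max_locks_per_transaction * max_connections
buckets, and that max_locks_per_transaction had been raised to 16384 from the
default 64; the real sizing formula differs in detail (it also accounts for
prepared transactions), so treat the numbers as approximate:

#include <stdio.h>

int
main(void)
{
    int     max_connections = 100;
    long    buckets_raised  = 16384L * max_connections;    /* 1,638,400 */
    long    buckets_default = 64L * max_connections;       /*     6,400 */
    double  extra_msec      = 4.0;
    long    extra_buckets   = buckets_raised - buckets_default;

    printf("extra empty buckets scanned: %ld\n", extra_buckets);
    printf("time per empty bucket: %.2f nsec\n",
           extra_msec * 1e6 / extra_buckets);
    return 0;
}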

We've noticed before that hash_seq_search() can be a bottleneck for
large lightly-populated hash tables.  I wonder if there's a good way
to reimplement it to avoid having to scan empty buckets?  There are
enough constraints on the hashtable implementation that I'm not sure
we can change it easily.
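
One direction that suggests itself, sketched here with made-up names (this
is not the dynahash API, just an illustration of the data-structure change),
is to thread every live entry onto a table-wide list in addition to its
bucket chain, so that a sequential scan's cost is proportional to the number
of entries rather than the number of buckets.  The catch is that insert and
delete then have to maintain that extra list, which is more work on the
lock-acquisition hot path:

/* Hypothetical layout; type and field names are invented for illustration. */
typedef struct ExampleEntry
{
    struct ExampleEntry *bucket_next;   /* chain within one hash bucket */
    struct ExampleEntry *all_prev;      /* table-wide list of live entries */
    struct ExampleEntry *all_next;
    /* ... key and payload ... */
} ExampleEntry;

typedef struct ExampleTable
{
    ExampleEntry  **buckets;            /* array of bucket chain heads */
    long            nbuckets;
    ExampleEntry   *all_entries;        /* head of the table-wide list */
} ExampleTable;

/* A sequential scan walks only live entries and never visits a bucket. */
static ExampleEntry *
example_seq_first(ExampleTable *tab)
{
    return tab->all_entries;
}

static ExampleEntry *
example_seq_next(ExampleEntry *cur)
{
    return cur->all_next;
}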

Anyway, as regards your original question: I don't see any other
non-debug hash_seq_searches of LockMethodProcLockHash, so this
particular issue probably doesn't affect anything except pg_locks.
Nonetheless, holding lock on that table for four msec is not good, so
you could expect to see some performance glitches when you examine
pg_locks.

            regards, tom lane
