Merlin Moncure <mmoncure@gmail.com> writes:
>> On Tue, Apr 28, 2009 at 5:41 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> [squint...] AFAICS the only *direct* cost component in pg_lock_status
>>> is the number of locks actually held or awaited. If there's a
>>> noticeable component that depends on max_locks_per_transaction, it must
>>> be from hash_seq_search() iterating over empty hash buckets. Which is
>>> a mighty tight loop. What did you have max_connections set to?
> oops. misread that...the default 100.
Hmm ... so we are talking about 1638400 vs 6400 hash buckets (the lock
table is sized at roughly max_locks_per_transaction times max_connections,
so 16384 x 100 vs the default 64 x 100) ... if that adds 4 msec to your
query time then it's taking about 2.5 nsec per empty bucket, which I
guess is not out of line for three lines of C code.
So that does seem to be the issue.
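
To put a figure like 2.5 nsec per bucket in perspective, here is a
standalone toy benchmark (not PostgreSQL code; the bucket count is just
borrowed from the numbers above) that walks an array of mostly-NULL
bucket pointers, the way hash_seq_search() has to step over empty
buckets:

    /* toy benchmark: per-bucket cost of skipping empty hash buckets */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static long
    scan_buckets(void **buckets, long nbuckets)
    {
        long    found = 0;

        for (long i = 0; i < nbuckets; i++)
            if (buckets[i] != NULL)     /* the tight inner loop */
                found++;
        return found;
    }

    int
    main(void)
    {
        long    nbuckets = 1638400;
        void  **buckets = calloc(nbuckets, sizeof(void *));
        struct timespec t0, t1;
        long    found;
        double  ns;

        buckets[42] = buckets;          /* one token occupied bucket */

        clock_gettime(CLOCK_MONOTONIC, &t0);
        found = scan_buckets(buckets, nbuckets);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%ld buckets, %ld occupied, %.1f nsec/bucket\n",
               nbuckets, found, ns / nbuckets);
        free(buckets);
        return 0;
    }

On typical hardware that prints a few nsec per bucket, which is in the
same ballpark as the estimate above.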
We've noticed before that hash_seq_search() can be a bottleneck for
large lightly-populated hash tables. I wonder if there's a good way
to reimplement it to avoid having to scan empty buckets? There are
enough constraints on the hashtable implementation that I'm not sure
we can change it easily.
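
For illustration only, here is one conceivable shape: thread every live
entry onto a doubly-linked all-entries list maintained at insert/delete
time, so a sequential scan is O(live entries) instead of O(buckets).
The names (Entry, Table, seq_next) are made up, and this glosses over
exactly the constraints that make it hard in dynahash (shared-memory
allocation, partitioned tables, scans that must tolerate concurrent
deletes):

    typedef struct Entry
    {
        struct Entry *chain;        /* next entry in the same bucket */
        struct Entry *scan_prev;    /* neighbors on the all-entries list */
        struct Entry *scan_next;
        /* key and payload would follow */
    } Entry;

    typedef struct Table
    {
        Entry **buckets;
        long    nbuckets;
        Entry   all;            /* dummy head of circular all-entries list */
    } Table;

    static void
    table_init_scan_list(Table *t)
    {
        t->all.scan_next = t->all.scan_prev = &t->all;
    }

    static void
    link_entry(Table *t, Entry *e, long bucket)
    {
        /* the usual bucket chain */
        e->chain = t->buckets[bucket];
        t->buckets[bucket] = e;
        /* plus O(1) upkeep of the all-entries list (unlink is symmetric) */
        e->scan_next = t->all.scan_next;
        e->scan_prev = &t->all;
        e->scan_next->scan_prev = e;
        t->all.scan_next = e;
    }

    /* sequential scan: O(live entries), never touches an empty bucket */
    static Entry *
    seq_next(Table *t, Entry *cur)
    {
        Entry  *next = cur ? cur->scan_next : t->all.scan_next;

        return (next == &t->all) ? NULL : next;
    }

The price is two pointers per entry plus O(1) work at every insert and
delete, which is why it's tempting but not obviously free.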
Anyway, as regards your original question: I don't see any other
non-debug hash_seq_searches of LockMethodProcLockHash, so this
particular issue probably doesn't affect anything except pg_locks.
Nonetheless, holding lock on that table for four msec is not good, so
you could expect to see some performance glitches when you examine
pg_locks.
regards, tom lane