Thread: pg_lock_status() performance
I have an unloaded development server running 8.4b1 that is returning from a 'select * from pg_locks' in around 5 ms. While the time itself is not a big deal, I was curious and tested querying locks on a fairly busy (200-500 tps sustained) server running 8.2 on inferior hardware. This returned (after an initial slower time) in well under 1 ms most of the time. Is this noteworthy? What factors slow down best-case pg_lock_status() performance?

edit: I bet it's the max_locks_per_transaction parameter. I really cranked it on the dev box during an experiment, to 16384. testing...yup, that's it. Are there any negative performance side-effects that could result from a (perhaps overly) cranked max_locks_per_transaction?

merlin
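[A minimal libpq harness along these lines can reproduce the kind of timing comparison described above. This is an illustrative sketch, not part of the original post; the connection string, run count, and build line are assumptions to adjust for your environment.]

/*
 * Sketch: time "SELECT * FROM pg_locks" a few times from C via libpq.
 * Build (library paths may differ):  cc -o locktime locktime.c -lpq
 */
#include <stdio.h>
#include <time.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");  /* assumed conninfo */
    struct timespec t0, t1;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    for (int i = 0; i < 5; i++)     /* several runs; the first is often slower */
    {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        PGresult   *res = PQexec(conn, "SELECT * FROM pg_locks");
        clock_gettime(CLOCK_MONOTONIC, &t1);

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        else
            printf("%d rows in %.3f ms\n", PQntuples(res),
                   (t1.tv_sec - t0.tv_sec) * 1e3 +
                   (t1.tv_nsec - t0.tv_nsec) / 1e6);
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}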
Merlin Moncure <mmoncure@gmail.com> writes:
> I have an unloaded development server running 8.4b1 that is returning
> from a 'select * from pg_locks' in around 5 ms. While the time itself
> is not a big deal, I was curious and tested querying locks on a fairly
> busy (200-500 tps sustained) server running 8.2 on inferior hardware.
> This returned (after an initial slower time) in well under 1 ms most of
> the time. Is this noteworthy? What factors slow down best-case
> pg_lock_status() performance?

> edit: I bet it's the max_locks_per_transaction parameter. I really
> cranked it on the dev box during an experiment, to 16384.
> testing...yup, that's it. Are there any negative performance
> side-effects that could result from a (perhaps overly) cranked
> max_locks_per_transaction?

[squint...] AFAICS the only *direct* cost component in pg_lock_status is the number of locks actually held or awaited. If there's a noticeable component that depends on max_locks_per_transaction, it must be from hash_seq_search() iterating over empty hash buckets. Which is a mighty tight loop. What did you have max_connections set to?

regards, tom lane
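[To make the empty-bucket point concrete, here is a standalone sketch, not PostgreSQL's actual hash_seq_search() implementation: a sequential scan of a chained hash table has to touch every bucket, so its cost tracks the bucket count rather than the handful of locks actually held.]

/*
 * Illustration only: scanning a chained hash table visits every bucket,
 * occupied or not, so a sparse table costs time proportional to its size.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct Entry
{
    struct Entry *next;
    long          payload;
} Entry;

/* Return the number of live entries; the outer loop still touches
 * every empty bucket along the way. */
static long
scan_all(Entry **buckets, long nbuckets)
{
    long    visited = 0;

    for (long b = 0; b < nbuckets; b++)         /* the "mighty tight loop" */
        for (Entry *e = buckets[b]; e != NULL; e = e->next)
            visited++;
    return visited;
}

int
main(void)
{
    long    nbuckets = 1638400;     /* a heavily oversized table */
    long    nentries = 50;          /* only a handful of real locks */
    Entry **buckets = calloc(nbuckets, sizeof(Entry *));

    if (buckets == NULL)
        return 1;

    for (long i = 0; i < nentries; i++)
    {
        Entry  *e = malloc(sizeof(Entry));

        e->payload = i;
        e->next = buckets[i * 31 % nbuckets];
        buckets[i * 31 % nbuckets] = e;
    }

    printf("found %ld entries scanning %ld buckets\n",
           scan_all(buckets, nbuckets), nbuckets);
    return 0;
}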
On Tue, Apr 28, 2009 at 5:41 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Merlin Moncure <mmoncure@gmail.com> writes:
>> I have an unloaded development server running 8.4b1 that is returning
>> from a 'select * from pg_locks' in around 5 ms. While the time itself
>> is not a big deal, I was curious and tested querying locks on a fairly
>> busy (200-500 tps sustained) server running 8.2 on inferior hardware.
>> This returned (after an initial slower time) in well under 1 ms most of
>> the time. Is this noteworthy? What factors slow down best-case
>> pg_lock_status() performance?
>
>> edit: I bet it's the max_locks_per_transaction parameter. I really
>> cranked it on the dev box during an experiment, to 16384.
>> testing...yup, that's it. Are there any negative performance
>> side-effects that could result from a (perhaps overly) cranked
>> max_locks_per_transaction?
>
> [squint...] AFAICS the only *direct* cost component in pg_lock_status
> is the number of locks actually held or awaited. If there's a
> noticeable component that depends on max_locks_per_transaction, it must
> be from hash_seq_search() iterating over empty hash buckets. Which is
> a mighty tight loop. What did you have max_connections set to?

16384 :D

(I was playing with a function that created a large number of tables/schemas)

merlin
On Tue, Apr 28, 2009 at 5:42 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
> On Tue, Apr 28, 2009 at 5:41 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Merlin Moncure <mmoncure@gmail.com> writes:
>>> I have an unloaded development server running 8.4b1 that is returning
>>> from a 'select * from pg_locks' in around 5 ms. While the time itself
>>> is not a big deal, I was curious and tested querying locks on a fairly
>>> busy (200-500 tps sustained) server running 8.2 on inferior hardware.
>>> This returned (after an initial slower time) in well under 1 ms most of
>>> the time. Is this noteworthy? What factors slow down best-case
>>> pg_lock_status() performance?
>>
>>> edit: I bet it's the max_locks_per_transaction parameter. I really
>>> cranked it on the dev box during an experiment, to 16384.
>>> testing...yup, that's it. Are there any negative performance
>>> side-effects that could result from a (perhaps overly) cranked
>>> max_locks_per_transaction?
>>
>> [squint...] AFAICS the only *direct* cost component in pg_lock_status
>> is the number of locks actually held or awaited. If there's a
>> noticeable component that depends on max_locks_per_transaction, it must
>> be from hash_seq_search() iterating over empty hash buckets. Which is
>> a mighty tight loop. What did you have max_connections set to?
>
> 16384 :D
>
> (I was playing with a function that created a large number of tables/schemas)

oops. misread that...the default 100.

merlin
Merlin Moncure <mmoncure@gmail.com> writes:
>> On Tue, Apr 28, 2009 at 5:41 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> [squint...] AFAICS the only *direct* cost component in pg_lock_status
>>> is the number of locks actually held or awaited. If there's a
>>> noticeable component that depends on max_locks_per_transaction, it must
>>> be from hash_seq_search() iterating over empty hash buckets. Which is
>>> a mighty tight loop. What did you have max_connections set to?

> oops. misread that...the default 100.

Hmm ... so we are talking about 1638400 vs 6400 hash buckets ... if that adds 4 msec to your query time then it's taking about 2.5 nsec per empty bucket, which I guess is not out of line for three lines of C code. So that does seem to be the issue.

We've noticed before that hash_seq_search() can be a bottleneck for large, lightly-populated hash tables. I wonder if there's a good way to reimplement it to avoid having to scan empty buckets? There are enough constraints on the hashtable implementation that I'm not sure we can change it easily.

Anyway, as regards your original question: I don't see any other non-debug hash_seq_searches of LockMethodProcLockHash, so this particular issue probably doesn't affect anything except pg_locks. Nonetheless, holding lock on that table for four msec is not good, so you could expect to see some performance glitches when you examine pg_locks.

regards, tom lane
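[For reference, the back-of-the-envelope arithmetic above can be checked in a few lines. This sketch assumes the quoted bucket counts come from max_locks_per_transaction times max_connections, which is what the 1638400 vs 6400 figures correspond to (16384 x 100 vs the default 64 x 100).]

/* Sanity-check the numbers quoted above. */
#include <stdio.h>

int
main(void)
{
    long    cranked = 16384L * 100; /* max_locks_per_transaction * max_connections */
    long    stock   = 64L * 100;    /* default setting, same max_connections */
    double  extra_ms = 4.0;         /* observed increase in pg_locks runtime */

    printf("buckets: %ld vs %ld\n", cranked, stock);
    printf("per empty bucket: %.2f ns\n",
           extra_ms * 1e6 / (cranked - stock));     /* ~2.5 ns, as estimated */
    return 0;
}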