Re: Speed up transaction completion faster after many relations are accessed in a transaction - Mailing list pgsql-hackers

From David Rowley
Subject Re: Speed up transaction completion faster after many relations are accessed in a transaction
Date
Msg-id CAKJS1f-rCUtQNpbNRf-pVWC+iuyc6EU28jDKMOboNV+hO2We3w@mail.gmail.com
In response to RE: Speed up transaction completion faster after many relations are accessed in a transaction  ("Tsunakawa, Takayuki" <tsunakawa.takay@jp.fujitsu.com>)
Responses RE: Speed up transaction completion faster after many relations are accessed in a transaction  ("Tsunakawa, Takayuki" <tsunakawa.takay@jp.fujitsu.com>)
List pgsql-hackers
On Mon, 22 Jul 2019 at 12:48, Tsunakawa, Takayuki
<tsunakawa.takay@jp.fujitsu.com> wrote:
>
> From: David Rowley [mailto:david.rowley@2ndquadrant.com]
> > I went back to the drawing board on this and I've added some code that counts
> > the number of times we've seen the table to be oversized and just shrinks
> > the table back down on the 1000th time.  6.93% / 1000 is not all that much.
>
> I'm afraid this kind of hidden behavior would appear mysterious to users.  They may wonder "Why is the same query
> fast at first in the session (5 or 6 times of execution), then gets slower for a while, and gets faster again?  Is there
> something to tune?  Am I missing something wrong with my app (e.g. how to use prepared statements)?"  So I prefer v5.

I personally don't think that's true.  The only way you'll notice the
LockReleaseAll() overhead is to execute very fast queries with a
bloated lock table.  It's pretty hard to notice that a single 0.1ms
query is slow.  You'd need to execute thousands of them before you
could measure it, and once you've done that, the lock shrink code
will have run and the query will be performing optimally again.

I voiced my concerns with v5, and I wasn't really willing to push it
with a known performance regression of 7% in a fairly valid case.  v6
does not suffer from that.

> > Of course, not all the extra overhead might be from rebuilding the table,
> > so here's a test with the updated patch.
>
> Where else does the extra overhead come from?

+ hash_get_num_entries(LockMethodLocalHash) == 0 &&
+ hash_get_max_bucket(LockMethodLocalHash) >
+ LOCKMETHODLOCALHASH_SHRINK_THRESHOLD)

that's executed every time, not every 1000 times.

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


