Re: Planner performance extremely affected by an hanging transaction (20-30 times)? - Mailing list pgsql-performance

From Bartłomiej Romański
Subject Re: Planner performance extremely affected by an hanging transaction (20-30 times)?
Date
Msg-id CAC6=Lj6FtkzgnG7-sZYGYYsU--HXJhhXAXDDic2pTO34_UMbCg@mail.gmail.com
In response to Re: Planner performance extremely affected by an hanging transaction (20-30 times)?  (Jeff Janes <jeff.janes@gmail.com>)
List pgsql-performance
> As a matter of fact you get the same slowdown after a rollback until autovacuum runs, and if autovacuum can't keep up...

Actually, this is not what we observe: performance goes back to normal immediately after the transaction is committed or aborted.
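For anyone who wants to see the effect, here is a minimal sketch of the scenario under discussion, meant to be run in psql. The categories table, column names and row counts below are illustrative, not the original poster's test case. On PostgreSQL releases from the era of this thread, the planning time of the range query should jump while the second session's transaction is left open, and drop back as soon as that transaction commits or rolls back:

-- Session 1: a small table with an indexed column.
CREATE TABLE categories (id serial PRIMARY KEY, name text);
INSERT INTO categories (name)
SELECT 'cat_' || g FROM generate_series(1, 1000) g;
ANALYZE categories;
\timing on
EXPLAIN SELECT * FROM categories WHERE id > 990;   -- baseline: planning is fast

-- Session 2: a "hanging" transaction that inserts a large batch and stays open.
BEGIN;
INSERT INTO categories (name)
SELECT 'pending_' || g FROM generate_series(1, 100000) g;
-- no COMMIT or ROLLBACK yet

-- Session 1 again: the same EXPLAIN is now much slower, apparently because
-- planning has to step over the not-yet-visible index entries.
EXPLAIN SELECT * FROM categories WHERE id > 990;

-- Session 2: end the transaction either way...
COMMIT;   -- or ROLLBACK;

-- ...and in session 1 planning is back to normal on the next EXPLAIN.
EXPLAIN SELECT * FROM categories WHERE id > 990;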



On Wed, Sep 25, 2013 at 1:30 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
> On Tue, Sep 24, 2013 at 11:03 AM, didier <did447@gmail.com> wrote:
>> Hi


>> On Tue, Sep 24, 2013 at 5:01 PM, <jesper@krogh.cc> wrote:
>>
>>> Apparently it is waiting for locks; can't the check be made in a
>>> "non-blocking" way, so that if it ends up waiting for a lock it just
>>> assumes the tuple is not visible and moves on to the next one?

>> Waiting for locks is not the only reason. It is one of them, but you can get the same slowdown with only one client and a bigger insert.
>>
>> The issue is that a btree search for one value degenerates into a linear search over 1000 or more tuples.
>>
>> As a matter of fact you get the same slowdown after a rollback until autovacuum runs, and if autovacuum can't keep up...

> Have you experimentally verified the last part? Btree indices have some special kill-tuple code which should remove aborted tuples from the index the first time they are encountered, without the need for a vacuum.
>
> Cheers,
>
> Jeff
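Jeff's kill-tuple question can be checked experimentally. The following is a hedged sketch, reusing the illustrative categories table from the sketch further up rather than the original test data: roll back a large insert, then run the same indexed query twice; if the kill-tuple logic behaves as described, the second pass should be fast again even though autovacuum has not run.

-- Insert a large batch and abort it.
BEGIN;
INSERT INTO categories (name)
SELECT 'aborted_' || g FROM generate_series(1, 100000) g;
ROLLBACK;

\timing on
-- First pass: the index scan has to visit the heap for the aborted entries,
-- and should mark them killed in the index as it goes.
EXPLAIN (ANALYZE, BUFFERS) SELECT max(id) FROM categories;

-- Second pass: expected to be fast again, with far fewer buffer touches,
-- even though no vacuum has run in between.
EXPLAIN (ANALYZE, BUFFERS) SELECT max(id) FROM categories;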
