Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit - Mailing list pgsql-performance

From Pavan Deolasee
Subject Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit
Msg-id 2e78013d0803102324i66fa5376rca1bc3d250bc8317@mail.gmail.com
In response to Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit  ("Heikki Linnakangas" <heikki@enterprisedb.com>)
List pgsql-performance
On Mon, Mar 10, 2008 at 4:31 PM, Heikki Linnakangas
<heikki@enterprisedb.com> wrote:
> According to oprofile, all the time is spent in TransactionIdIsInProgress.
> I think it would be pretty straightforward to store the committed
> subtransaction ids in a sorted array, instead of a linked list, and
> binary search.
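
For illustration, a rough standalone sketch of that approach (not the real
backend code: TransactionId is just a uint32_t here, and XID wraparound and
locking are ignored):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef uint32_t TransactionId;

    static int
    xid_cmp(const void *a, const void *b)
    {
        TransactionId xa = *(const TransactionId *) a;
        TransactionId xb = *(const TransactionId *) b;

        return (xa > xb) - (xa < xb);
    }

    /* O(log n) membership test instead of walking a linked list */
    static bool
    committed_subxact_contains(const TransactionId *xids, size_t nxids,
                               TransactionId xid)
    {
        return bsearch(&xid, xids, nxids, sizeof(TransactionId),
                       xid_cmp) != NULL;
    }

    int
    main(void)
    {
        /* kept sorted as subtransactions commit */
        TransactionId committed[] = {1001, 1005, 1010, 1042};

        printf("%d\n", committed_subxact_contains(committed, 4, 1010));  /* 1 */
        printf("%d\n", committed_subxact_contains(committed, 4, 1006));  /* 0 */
        return 0;
    }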

Assuming that in most cases there will be many committed and few aborted
subtransactions, how about storing the list of *aborted* subtransactions
instead?
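
Something along these lines, just as a toy sketch (same simplified types as
above; it assumes the XID in question is already known to be a subtransaction
of the current top-level transaction, which the real code would still have to
check):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t TransactionId;

    /* Only aborted subxact XIDs are remembered; this list should stay small. */
    typedef struct SubXactAborts
    {
        const TransactionId *aborted;
        size_t               naborted;
    } SubXactAborts;

    /*
     * A subxact of the current transaction counts as committed unless it
     * appears in the aborted list, so a short linear scan is enough.
     */
    static bool
    subxact_did_commit(const SubXactAborts *s, TransactionId xid)
    {
        for (size_t i = 0; i < s->naborted; i++)
        {
            if (s->aborted[i] == xid)
                return false;       /* explicitly rolled back */
        }
        return true;
    }

    int
    main(void)
    {
        TransactionId aborted[] = {1007};
        SubXactAborts s = {aborted, 1};

        printf("%d\n", subxact_did_commit(&s, 1005));   /* 1: committed */
        printf("%d\n", subxact_did_commit(&s, 1007));   /* 0: aborted */
        return 0;
    }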


Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB     http://www.enterprisedb.com
