Re: Savepoints in transactions for speed? - Mailing list pgsql-performance

From Jeff Janes
Subject Re: Savepoints in transactions for speed?
Date
Msg-id CAMkU=1yTfoSYjc3QT2U91pmTAxKodTmpD2wQTrRAd_dara24TQ@mail.gmail.com
In response to Re: Savepoints in transactions for speed?  (Claudio Freire <klaussfreire@gmail.com>)
List pgsql-performance
On Thu, Nov 29, 2012 at 11:58 AM, Claudio Freire <klaussfreire@gmail.com> wrote:
> On Thu, Nov 29, 2012 at 3:32 PM, Jeff Davis <pgsql@j-davis.com> wrote:
>>
>> I tried a quick test with 2M tuples and 3 indexes over int8, numeric,
>> and text (generated data). There was also an unindexed bytea column.
>> Using my laptop, a full update of the int8 column (which is indexed,
>> forcing cold updates) took less than 4 minutes.
>>
>> I'm sure there are other issues with real-world workloads, and I know
>> that it's wasteful compared to something that can make use of HOT
>> updates. But unless there is something I'm missing, it's not really
>> worth the effort to batch if that is the size of the update.
>
> On a pre-production database I have (that is currently idle), on a
> server with 4G RAM and a single SATA disk (probably similar to your
> laptop in that sense more or less, possibly more TPS since the HD rpm
> is 7k and your laptop probably is 5k), it's been running for half an
> hour and is still running (and I don't expect it to finish today if
> past experience is to be believed).

So probably Jeff Davis's indexes fit in RAM (or the part that can be
dirtied without causing thrashing), and yours do not.

But does batching them up help at all?  I doubt it does.
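For anyone who does want to try the batched approach: the usual pattern is to split the update into key ranges and run each range in its own transaction (or under its own savepoint), so the set of index pages dirtied at any one time stays small. A minimal sketch in Python — the table, column, and SQL in the comments are hypothetical, not taken from this thread:

```python
def key_ranges(min_id, max_id, batch_size):
    """Yield inclusive (lo, hi) ranges covering [min_id, max_id]."""
    lo = min_id
    while lo <= max_id:
        hi = min(lo + batch_size - 1, max_id)
        yield lo, hi
        lo = hi + 1

# Each range would then drive one statement, committed per batch, e.g.:
#   UPDATE t SET v = v + 1 WHERE id BETWEEN %(lo)s AND %(hi)s
# with a COMMIT (or RELEASE SAVEPOINT) after each one.
```

Whether this actually beats a single big UPDATE is exactly the open question in this thread; it mainly pays off when each batch's dirtied index pages fit in RAM, which is the distinction drawn above between the two test machines.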

Cheers,

Jeff

