Re: Savepoints in transactions for speed? - Mailing list pgsql-performance

From Claudio Freire
Subject Re: Savepoints in transactions for speed?
Date
Msg-id CAGTBQpZ-NLHeZPnD9m2O-UZfMba14t1aU2K73c0tOvz_w232LQ@mail.gmail.com
In response to Re: Savepoints in transactions for speed?  (Mike Blackwell <mike.blackwell@rrd.com>)
Responses Re: Savepoints in transactions for speed?  (Mike Blackwell <mike.blackwell@rrd.com>)
Re: Savepoints in transactions for speed?  (Jeff Davis <pgsql@j-davis.com>)
List pgsql-performance
On Tue, Nov 27, 2012 at 10:08 PM, Mike Blackwell <mike.blackwell@rrd.com> wrote:
>
> > Postgresql isn't going to run out of resources doing a big transaction, in the way some other databases will.
>
> I thought I had read something at one point about keeping the transaction size on the order of a couple thousand
> because there were issues when it got larger.  As that apparently is not an issue I went ahead and tried the DELETE
> and COPY in a transaction.  The load time is quite reasonable this way.

Updates are faster if batched, if your business logic allows it,
because batching creates less bloat and more opportunities for
HOT updates. I don't think the same applies to inserts, though,
and I haven't heard that it does.
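As a rough sketch of what batching updates can mean in practice, many single-row UPDATE statements can be collapsed into one statement over a VALUES list. Table and column names here are purely illustrative, not from the thread:

```sql
-- Instead of one statement (and one round trip) per row:
--   UPDATE items SET price = 10 WHERE id = 1;
--   UPDATE items SET price = 20 WHERE id = 2;
-- update all rows in a single statement:
UPDATE items AS i
SET    price = v.price
FROM   (VALUES (1, 10), (2, 20), (3, 30)) AS v(id, price)
WHERE  i.id = v.id;
```

Fewer statements means less per-statement overhead, and (independently of batching) an update can be HOT only when no indexed column changes and the new row version fits on the same page.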

In any case, if your business logic doesn't allow batching (and
your case seems to suggest it doesn't), there's no point in worrying.
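For reference, the DELETE-and-COPY-in-one-transaction load mentioned above might look like the following. The table and file names are illustrative assumptions, not taken from the thread:

```sql
-- Wrapping both steps in one transaction means concurrent readers
-- never see the table empty, and a failure during COPY rolls back
-- the DELETE as well.
BEGIN;
DELETE FROM staging_table;
COPY staging_table FROM '/path/to/data.csv' WITH (FORMAT csv);
COMMIT;
```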

