Trevor Astrope wrote:
> I was wondering if anyone found a sweet spot regarding how many
> inserts to do in a single transaction to get the best performance?
> Is there an approximate number where there isn't any more
> performance to be had or performance may drop off?
>
> It's just a general question...I don't have any specific scenario,
> other than there are multiple backends doing many inserts.
The ideal is to batch enough statements that the work involved in
establishing the transaction context becomes a small fraction of the
cost of processing the queries themselves.
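For instance, here is a minimal sketch of the pattern in plain SQL
(the "demo" table and its values are made up purely for illustration):

  -- hypothetical table, just for illustration
  CREATE TABLE demo (id integer, val text);

  -- one explicit transaction amortizes the per-commit overhead
  -- across every insert inside it
  BEGIN;
  INSERT INTO demo VALUES (1, 'one');
  INSERT INTO demo VALUES (2, 'two');
  -- ... many more inserts ...
  COMMIT;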
Thus, 2 inserts per transaction should be nearly twice as good as 1
(at least while the transaction overhead dominates the cost), by
virtue of dividing the fixed cost of the transaction 2 ways.
Increasing the number of inserts/updates to 10 means splitting the
cost 10 ways.
Increasing the number to 1000 means splitting the cost 1000 ways,
which, while better than merely splitting the cost 10 ways, probably
_isn't_ a stunningly huge further improvement.
The benefit of grouping more together drops off; you'll probably NOT
notice much difference between grouping 10,000 updates together into a
transaction as compared to grouping 15,000 updates together.
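To put rough numbers on it (the figures here are purely hypothetical):
suppose each COMMIT costs 10 ms and each INSERT costs 0.1 ms. The
per-row cost is then the insert cost plus the commit cost divided by
the batch size N:

  N = 1:      0.1 + 10/1     = 10.1  ms/row
  N = 10:     0.1 + 10/10    =  1.1  ms/row
  N = 1000:   0.1 + 10/1000  =  0.11 ms/row
  N = 10000:  0.1 + 10/10000 =  0.101 ms/row

Going from 1 to 10 is nearly a tenfold win; going from 1,000 to
10,000 is barely measurable.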
Fortunately, it doesn't drop off to being downright WORSE.
On Oracle, I have seen performance Suck Badly when using SQL*Loader;
if I grouped too many updates together, it started blowing up the
"rollback segment," which was a Bad Thing. And in that kind of
context, there will typically be some "sweet spot" where you want to
commit transactions before they grow too big.
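The usual pattern there (sketched below with the same hypothetical
"demo" table, and an arbitrary batch boundary) is simply to commit
every so many rows so that no single transaction grows without bound:

  -- batch 1: commit before the transaction gets too large
  BEGIN;
  INSERT INTO demo VALUES (3, 'three');
  -- ... up to some chosen batch size ...
  COMMIT;

  -- batch 2: start fresh and repeat
  BEGIN;
  INSERT INTO demo VALUES (4, 'four');
  -- ... next batch ...
  COMMIT;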
In contrast, pg_dump/pg_restore loads each table with a single COPY
statement, so a table with 15,000,000 entries gets loaded in one
(rather enormous) transaction. And doing things that way presents no
particular problem.
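For reference, the COPY form looks roughly like this (again using the
made-up "demo" table; in a psql script the rows follow on stdin,
tab-separated, and end with a backslash-period):

  -- one COPY moves all the rows in a single statement, and
  -- hence (under autocommit) in a single transaction
  COPY demo (id, val) FROM stdin;
  5	five
  6	six
  \.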
--
output = ("cbbrowne" "@" "libertyrms.info")
<http://dev6.int.libertyrms.com/>
Christopher Browne
(416) 646 3304 x124 (land)