Re: Bunching "transactions" - Mailing list pgsql-performance

From Chris Browne
Subject Re: Bunching "transactions"
Msg-id 60k5pbcc50.fsf@dba2.int.libertyrms.com
In response to Bunching "transactions"  (Jean-David Beyer <jeandavid8@verizon.net>)
List pgsql-performance

jeandavid8@verizon.net (Jean-David Beyer) writes:
> But what is the limitation on such a thing? In this case, I am just
> populating the database and there are no other users at such a time. I am
> willing to lose the whole insert of a file if something goes wrong -- I
> would fix whatever went wrong and start over anyway.
>
> But at some point, disk IO would have to be done. Is this just a function of
> how big /pgsql/data/postgresql.conf's shared_buffers is set to? Or does it
> have to do with wal_buffers and checkpoint_segments?

I have done bulk data loads where I was typically loading hundreds of
thousands of rows as a single transaction, and it is worth observing
that restoring from a pg_dump does exactly the same thing: in general,
each table's data is loaded as a single transaction.
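
The pattern above can be sketched as follows; this is a minimal,
hypothetical example (table and file names are made up), using the
COPY syntax that works on 8.x-era servers. If anything fails, the
whole load rolls back and you can fix the input and re-run it:

```sql
-- Load an entire file's worth of rows as one transaction.
BEGIN;
COPY my_table FROM '/path/to/data.csv' WITH CSV;
COMMIT;
```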

Increasing the number of checkpoint segments has tended to help,
though it is less obviously the case in 8.2 and later versions, given
the ongoing changes to checkpoint flushing.
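
As a hedged starting point for a bulk load, something like the
following in postgresql.conf is often suggested; the exact values are
workload-dependent guesses, not recommendations, and checkpoint_segments
was later replaced by max_wal_size in 9.5+:

```
# Hypothetical bulk-load settings for an 8.x-era server
checkpoint_segments = 32   # more WAL between checkpoints (default was 3)
wal_buffers = 64           # in 8KB pages on 8.x; larger helps big transactions
```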

This isn't something that typically needs to be tuned really finely;
on a reasonably tuned database, "pretty big transactions" should work
fine, up to rather large sizes of "pretty big."
--
"cbbrowne","@","acm.org"
http://linuxdatabases.info/info/languages.html
"Why use Windows, since there is a door?"
-- <fachat@galileo.rhein-neckar.de> Andre Fachat
