From: Dennis Björklund
Subject: Re: Performance Tuning
Msg-id: Pine.LNX.4.44.0308130819460.2191-100000@zigo.dhs.org
In response to: Re: Performance Tuning (mixo <mixo@coza.net.za>)
List: pgsql-performance
On Tue, 12 Aug 2003, mixo wrote:

> I am currently importing data into Pg which is about 2.9 Gigs.
> Unfortunately, to maintain data integrity, data is inserted into a table
> one row at a time.

So you don't put a number of inserts into one transaction?

If you don't, PostgreSQL will treat each command as its own transaction,
and every insert is forced out to disk before the command returns.
Returning while the data is only in some cache is not safe, even if other
products do that: the server would promise the client that the row has
been stored, but if the server then went down, the row sitting in the
cache would be lost. That is much faster, but it's not what you expect
from a real database.

So, group the inserts into transactions of maybe 1000 commands each. It
will go much faster: the server can keep the rows in cache and, at commit,
just make sure all 1000 have been written out to disk.
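
For example, a minimal sketch of what the import could look like (the
table and column names here are made up for illustration):

    BEGIN;
    INSERT INTO import_data (id, val) VALUES (1, 'first row');
    INSERT INTO import_data (id, val) VALUES (2, 'second row');
    -- ... and so on, up to roughly 1000 rows ...
    COMMIT;

That's one forced write to disk per 1000 rows instead of one per row.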

There is also a configuration variable that tells PostgreSQL not to wait
until the insert is safely on disk, but that is not recommended if you
value your data.
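
For reference, the variable being described is presumably fsync in
postgresql.conf; a sketch of the setting (again, leave it at the default
if you value your data):

    # postgresql.conf
    # fsync = off makes commits return before the data is safely on disk;
    # a crash can then lose or corrupt committed data.
    fsync = off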

And lastly, why does it help integrity to insert data one row at a time?

--
/Dennis

