Re: Optimize for insertions? - Mailing list pgsql-general

From Jeff Davis
Subject Re: Optimize for insertions?
Date
Msg-id 200111262211.OAA27228@smtp.ucsd.edu
In response to Optimize for insertions?  ("Gurupartap Davis" <partap@yahoo.com>)
List pgsql-general
I think the fastest way to load data into a PostgreSQL table is COPY.

Try exporting the data from mysql in a text format, for example
tab-delimited. Then, use COPY to load it. That should go *much* faster than
inserts.
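For example, mysqldump's --tab option writes one tab-delimited .txt file per
table, which COPY can read directly (tab is COPY's default delimiter). A rough
sketch; the table name "mytable" and the file path are made up, so adjust them
to your schema:

```sql
-- Load a tab-delimited dump, e.g. one produced by: mysqldump --tab=/tmp/dump
-- Server-side COPY (the file must be readable by the backend, superuser only):
COPY mytable FROM '/tmp/dump/mytable.txt';

-- Or, from psql as an ordinary user, read the file on the client side:
\copy mytable FROM '/tmp/dump/mytable.txt'
```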

If that is for some reason not a viable solution, break the inserts into
large blocks and wrap each block in its own transaction (explicit
BEGIN/COMMIT). Otherwise each INSERT commits individually, which tells
postgres to flush to disk each time, basically. This might not help much in
your case, though, since you already turned off fsync.
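Roughly, so that a few thousand rows commit together instead of one at a
time (the table name and batch size here are just illustrations):

```sql
BEGIN;
INSERT INTO mytable VALUES (...);
INSERT INTO mytable VALUES (...);
-- ... a few thousand more inserts ...
COMMIT;
```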

Hope this helps,
    Jeff Davis

On Monday 26 November 2001 01:17 pm, you wrote:
> Hi,
>
> I'm trying to migrate a mysql database to postgresql.  I've got the tables
> all set up, but inserting the data from the mysql dumps is taking forever.
> I've got about 200 million rows in the big table (growing by about 1.5
> million per day), but at the 100 inserts/second I'm getting, this will take
> over 3 weeks.  MySQL on the same machine averages about 1100-1200
> inserts/second (no, I'm not running them both at the same time ;-)
>
> Any thoughts on how to tweak the postmaster for quick inserts?  I've got
> fsync turned off, but that's about all I've tried so far.
>
> I'm running postgresql 7.1.3 on Linux (Redhat 7.2 with ext3) with a  700MHz
> processor, 256MB RAM, and an 80GB IDE HD
>
> Thanks
> -partap
