Hi,
What you probably want to do is export the MySQL data into delimited text files
and use PostgreSQL's COPY command. It's much faster than doing straight
INSERTs.
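For example, mysqldump's --tab option should give you tab-separated files with
\N for NULLs, which happens to match COPY's default input format. On the
PostgreSQL side it would then look roughly like this (the table name and file
path are just placeholders for your own):

    -- server-side COPY; the file has to be readable by the backend
    -- process and you need to be a database superuser
    COPY big_table FROM '/tmp/dump/big_table.txt';

    -- or, from psql, \copy reads the file on the client side instead,
    -- so no superuser access or server-local file is needed
    \copy big_table from /tmp/dump/big_table.txt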
If you have to do inserts for some reason, it might be helpful to group them
into chunks of N insert operations each, and put each chunk in a transaction.
I'm not sure what a good value for N would be though, and this wouldn't be
nearly as fast as the COPY command in any case.
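Just as a sketch of what I mean (the table and its columns are made up, and N
here is an arbitrary 1000 or so):

    -- made-up table, only so the example is complete
    CREATE TABLE big_table (id integer, val text);

    BEGIN;
    INSERT INTO big_table VALUES (1, 'first row');
    INSERT INTO big_table VALUES (2, 'second row');
    -- ... and so on, up to N rows in this chunk ...
    COMMIT;

    BEGIN;
    -- next chunk of N rows
    COMMIT;

Without an explicit transaction each INSERT commits on its own, so batching
them this way cuts down the per-transaction overhead.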
HTH,
Wes Sheldahl
"Gurupartap Davis" <partap%yahoo.com@interlock.lexmark.com> on 11/26/2001
04:17:00 PM
To: pgsql-general%postgresql.org@interlock.lexmark.com
cc: (bcc: Wesley Sheldahl/Lex/Lexmark)
Subject: [GENERAL] Optimize for insertions?
Hi,
I'm trying to migrate a MySQL database to PostgreSQL. I've got the tables all
set up, but inserting the data from the MySQL dumps is taking forever. I've
got about 200 million rows in the big table (growing by about 1.5 million per
day), but at the 100 inserts/second I'm getting, this will take over 3 weeks.
MySQL on the same machine averages about 1100-1200 inserts/second (no, I'm not
running them both at the same time ;-)
Any thoughts on how to tweak the postmaster for quick inserts? I've got fsync
turned off, but that's about all I've tried so far.
I'm running PostgreSQL 7.1.3 on Linux (Red Hat 7.2 with ext3) with a 700MHz
processor, 256MB RAM, and an 80GB IDE HD.
Thanks
-partap