Re: slow inserts - Mailing list pgsql-admin

From Rolf Luettecke
Subject Re: slow inserts
Date
Msg-id 20020321095601.7c3eff8b.rolf.luettecke@michael-telecom.de
In response to Re: slow inserts  (Jodi Kanter <jkanter@virginia.edu>)
List pgsql-admin
Hi Jodi,

> None of the data is actually committed to the database until the scripts
> complete so I believe that autocommit is turned off.
>
what if you try to write the output of your script into a separate file
and pipe that into psql as input? What I mean is to strip off the processing
time for the "Excel part". Does it still take 25 minutes to do the job?
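
Something like this (just a sketch; "your_script", "inserts.sql" and "yourdb"
stand in for your actual script, output file and database names):

    # write only the generated SQL, without touching the database
    ./your_script > inserts.sql

    # then time the two halves separately
    time ./your_script > /dev/null      # pure "excel" processing
    time psql -d yourdb -f inserts.sql  # pure database work

If the INSERTs in that file aren't already wrapped in BEGIN/COMMIT, keep them
in a single transaction, otherwise psql commits every statement on its own and
the timing won't match what your script does now.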

We often insert data in similar amounts (10,000 - 100,000 rows per job)
within a few seconds or minutes. A few months ago I had the same problem
"writing" a dbf file from Postgres data: the SELECT statement took
milliseconds, but the conversion into dbf format seemed to take forever.

BTW: We also had a table (tens of thousands of rows, vacuumed daily) which was
rather slow during inserts (PostgreSQL 7.1.2). After upgrading to
version 7.1.3 and completely rebuilding the tables, the problem went away.

Hope it helps
R. Luettecke

--
MICHAEL TELECOM AG
Bruchheide 34 - 49163 Bohmte
Fon: +49 5471 806-0
rolf.luettecke@michael-telecom.de
http://www.michael-telecom.de
