We have been down this path before, but I am at the point
where I think this problem is so big that it makes PostgreSQL
unusable for me, and it could cause big problems for
many of us with large databases, especially if you
need to reload your data, say, when upgrading to 6.5.
I have a database with a dozen or so tables, but
one table in particular has about 60 fields, all
character fields ranging in size from 16 to 128 bytes.
This table has about 150,000 records. I have a text
file with insert statements, one per line, that I feed
through standard input to psql.
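For reference, the load is simply

    psql mydb < inserts.sql

where each line of inserts.sql is an insert of the form below
(the database, file, table, and column names here are made up,
and the real table has about 60 columns):

    insert into bigtable (f1, f2, f3) values ('abc', 'def', 'ghi');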
I have tried many combinations of things to speed this
up as you all have suggested. I have had no success
using "copy" at all because of problems with quotes
and other punctuation in the data.
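As I understand it, copy wants tab-separated fields, with any
backslashes, tabs, and newlines inside the data escaped as \\, \t,
and \n, and a final \. line to end the input, roughly like this
(made-up table and data, with <tab> standing for a literal tab
character):

    copy bigtable from stdin;
    O'Brien<tab>"ACME, Inc."<tab>first line\nsecond line
    \.

But I have not managed to massage my data into a form that copy
will accept cleanly.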
On this last attempt, I have bracketed each insert statement with
"begin;" and "end;".
What I am seeing this time around is that the inserts started
out at a reasonable speed, say 6 or 7 per second. But now that
the load is up to record 100,000 or so (3 DAYS later), the time
between inserts is about 10 SECONDS, and the inserts keep
getting slower as the load progresses. At the current rate, I
have another 138 hours before completion!
This is on a fast Red Hat 5.2 machine: plenty of RAM,
Ultra-Wide SCSI, and a 450 MHz Pentium II.
No indexes exist in the database while reloading;
I will build them after loading the data.
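When I do, it will just be ordinary create index statements,
along the lines of (index and column names made up):

    create index bigtable_name_idx on bigtable (name);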
My biggest fear is that as this database continues to grow,
headed toward 500,000 records or more, it will reach the
point (it may already have) where the data can never be
reloaded again, which is a scary situation.
I'm still searching for help. Thanks.