I am currently trying to import a text data file with about 45,000
records. At the end of the import it does an update on each of the 45,000
records. All of the inserts complete in a fairly short amount of time
(about 2 1/2 minutes). Once it gets to the updates, though, it slows to a
crawl. After about 10 minutes it has only done about 3,000 records.
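To give an idea of the shape of it, the update phase is roughly like this
(the table and column names here are just placeholders; I'm assuming one
UPDATE per record keyed on its id, inside the same transaction as the
inserts):

  BEGIN;
  -- ... the ~45,000 INSERTs go here ...
  -- then one UPDATE per imported record:
  UPDATE import_data SET status = 'processed' WHERE record_id = 1;
  UPDATE import_data SET status = 'processed' WHERE record_id = 2;
  -- ... and so on for each of the ~45,000 records ...
  COMMIT;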
Is that normal? Is it because it's all inside such a large transaction? Is
there anything I can do to speed it up? It seems awfully slow to me.
I didn't think that giving it more shared buffers would help, but I tried
anyway. It didn't help.
I tried doing a full vacuum with analyze (vacuumdb -z -f) and it cleaned up
a lot of stuff, but it didn't speed up the updates at all.
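If it matters, my understanding is that the command amounts to running the
following against the database from psql:

  -- equivalent of vacuumdb -z -f (full vacuum plus analyze):
  VACUUM FULL ANALYZE;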
I am using a dual 800MHz Xeon box with 2 GB of RAM. I've tried anywhere
from about 16,000 to 65,000 shared buffers.
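For clarity, by "shared buffers" I mean the shared_buffers setting (set in
postgresql.conf in my case), e.g.:

  # postgresql.conf -- range of values I've tried; with the default
  # 8 KB block size that works out to roughly 125 MB to 510 MB
  shared_buffers = 16000
  #shared_buffers = 65000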
What other factors are involved here?