performance problem - Mailing list pgsql-general

From Rick Gigger
Subject performance problem
Date
Msg-id 01b201c3ae14$92bd3c00$0700a8c0@trogdor
In response to Point-in-time data recovery - v.7.4  (Rafael Martinez Guerrero <r.m.guerrero@usit.uio.no>)
Responses Re: performance problem
Re: performance problem
Re: performance problem
List pgsql-general
I am currently trying to import a text data file with about 45,000
records.  At the end of the import it does an update on each of the 45,000
records.  Doing all of the inserts completes in a fairly short amount of
time (about 2 1/2 minutes).  Once it gets to the updates, though, it slows
to a crawl.  After about 10 minutes it has only done about 3,000 records.
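
For reference, here is roughly the pattern the import follows, all inside
one transaction (the table and column names below are just placeholders,
not my real schema):

    BEGIN;
    -- ~45,000 inserts from the text file
    INSERT INTO import_data (id, raw_line) VALUES (1, '...');
    INSERT INTO import_data (id, raw_line) VALUES (2, '...');
    -- ...
    -- then ~45,000 single-row updates
    UPDATE import_data SET processed = true WHERE id = 1;
    UPDATE import_data SET processed = true WHERE id = 2;
    -- ...
    COMMIT;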

Is that normal?  Is it because it's inside such a large transaction?  Is
there anything I can do to speed that up?  It seems awfully slow to me.

I didn't think that giving it more shared buffers would help, but I tried
anyway.  It didn't help.
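
In case it matters, the setting I've been changing is shared_buffers in
postgresql.conf (a count of buffers, not a memory size), followed by a
restart of the postmaster, e.g.:

    # postgresql.conf
    shared_buffers = 65000    # number of 8 kB buffers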

I tried doing a full vacuum with analyze on it (vacuumdb -z -f), and it
cleaned up a lot of stuff, but it didn't speed up the updates at all.
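
As I understand it, that command amounts to running the following in psql
against the database:

    VACUUM FULL ANALYZE;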

I am using a dual 800 MHz Xeon box with 2 GB of RAM.  I've tried anywhere
from about 16,000 to 65,000 shared buffers.
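
Assuming the default 8 kB block size, that range works out to roughly:

    16,000 buffers x 8 kB = ~125 MB
    65,000 buffers x 8 kB = ~508 MB   (about a quarter of the 2 GB of RAM)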

What other factors are involved here?

