Re: performance problem - Mailing list pgsql-general

From: Mike Mascari
Subject: Re: performance problem
Date:
Msg-id: 3FBA8911.5060805@mascari.com
In response to: performance problem ("Rick Gigger" <rick@alpinenetworking.com>)
List: pgsql-general
Rick Gigger wrote:

> I am currently trying to import a text data file with about 45,000
> records.  At the end of the import it does an update on each of the 45,000
> records.  Doing all of the inserts completes in a fairly short amount of
> time (about 2 1/2 minutes).  Once it gets to the updates, though, it slows
> to a crawl.  After about 10 minutes it's only done about 3,000 records.
>
> Is that normal?  Is it because it's inside such a large transaction?  Is
> there anything I can do to speed that up?  It seems awfully slow to me.
>
> I didn't think that giving it more shared buffers would help, but I tried
> anyway.  It didn't help.
>
> I tried doing a full vacuum with analyze on it (vacuumdb -z -f) and it
> cleaned up a lot of stuff, but it didn't speed up the updates at all.
>
> I am using a dual 800 MHz Xeon box with 2 GB of RAM.  I've tried anywhere
> from about 16,000 to 65,000 shared buffers.
>
> What other factors are involved here?

It is difficult to say without knowing either the definition of the
relation(s) or the update queries involved. Are there indexes being
created after the import that would allow PostgreSQL to locate the
rows being updated quickly, or is the update an unqualified update (no
WHERE clause) that affects all tuples?
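For example, if the import populates something like the table below (the
table and column names here are only guesses for illustration, not Rick's
actual schema), creating an index on the key column after the bulk insert
lets each per-row update do an index lookup instead of scanning all 45,000
rows every time:

    -- Hypothetical staging table populated by the import:
    CREATE TABLE import_data (
        id      integer,
        status  text
    );

    -- Without an index on id, every one of the 45,000 statements like
    --   UPDATE import_data SET status = 'done' WHERE id = 12345;
    -- has to sequentially scan the whole table to find its row.

    -- Build the index once the bulk insert is finished, then update stats:
    CREATE INDEX import_data_id_idx ON import_data (id);
    ANALYZE import_data;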

EXPLAIN ANALYZE is your friend...
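Running one of the slow updates under EXPLAIN ANALYZE (again using the
hypothetical names above) will show whether the planner is using an index
scan or a sequential scan. Keep in mind that EXPLAIN ANALYZE actually
executes the statement, so wrap it in a transaction you can roll back:

    BEGIN;

    EXPLAIN ANALYZE
    UPDATE import_data SET status = 'done' WHERE id = 12345;

    -- Look for "Index Scan using import_data_id_idx on import_data"
    -- rather than "Seq Scan on import_data" in the plan output.

    ROLLBACK;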

Mike Mascari
mascarm@mascari.com


