Uh, I feel a little silly now. I had an index on the field in question
(needed to locate the row to update), but I later recreated the table and
forgot to re-add it. I had assumed it was there, but I double-checked just
now and it was gone. I then re-added the index and the update finished in a
few minutes.
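In case it helps anyone searching the archives, the fix was just a plain
CREATE INDEX on the column used in the UPDATE's WHERE clause, roughly like
this (the table and column names here are made up, not my actual schema):

    -- hypothetical names; substitute the real table and lookup column
    CREATE INDEX import_data_record_id_idx ON import_data (record_id);

    -- confirm the planner now chooses an index scan for the lookup
    EXPLAIN ANALYZE SELECT * FROM import_data WHERE record_id = 12345;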
Sorry about that one. Thanks for the help.
rg
----- Original Message -----
From: "Mike Mascari" <mascarm@mascari.com>
To: "Rick Gigger" <rick@alpinenetworking.com>
Cc: "PgSQL General ML" <pgsql-general@postgresql.org>
Sent: Tuesday, November 18, 2003 2:03 PM
Subject: Re: [GENERAL] performance problem
> Rick Gigger wrote:
>
> > I am currently trying to import a text data file with about 45,000
> > records. At the end of the import it does an update on each of the
> > 45,000 records. Doing all of the inserts completes in a fairly short
> > amount of time (about 2 1/2 minutes). Once it gets to the updates,
> > though, it slows to a crawl. After about 10 minutes it's only done
> > about 3000 records.
> >
> > Is that normal? Is it because it's inside such a large transaction? Is
> > there anything I can do to speed that up? It seems awfully slow to me.
> >
> > I didn't think that giving it more shared buffers would help but I tried
> > anyway. It didn't help.
> >
> > I tried a full vacuum with analyze (vacuumdb -z -f) and it cleaned up a
> > lot of stuff, but it didn't speed up the updates at all.
> >
> > I am using a dual 800MHz Xeon box with 2 GB of RAM. I've tried anywhere
> > from about 16,000 to 65,000 shared buffers.
> >
> > What other factors are involved here?
>
> It is difficult to say without knowing either the definition of the
> relation(s) or the update queries involved. Are there indexes being
> created after the import that would allow PostgreSQL to locate the
> rows being updated quickly, or is the update an unqualified update (no
> WHERE clause) that affects all tuples?
>
> EXPLAIN ANALYZE is your friend...
>
> Mike Mascari
> mascarm@mascari.com
>