Re: performance problem - Mailing list pgsql-general

From Dann Corbit
Subject Re: performance problem
Date
Msg-id D90A5A6C612A39408103E6ECDD77B829408C5D@voyager.corporate.connx.com
In response to performance problem  ("Rick Gigger" <rick@alpinenetworking.com>)
List pgsql-general
> -----Original Message-----
> From: Rick Gigger [mailto:rick@alpinenetworking.com]
> Sent: Tuesday, November 18, 2003 12:43 PM
> To: PgSQL General ML
> Subject: [GENERAL] performance problem
>
>
> I am currently trying to import a text data file with
> about 45,000 records.  At the end of the import it does an
> update on each of the 45,000 records.  Doing all of the
> inserts completes in a fairly short amount of time (about 2
> 1/2 minutes).  Once it gets to the updates, though, it
> slows to a crawl.  After about 10 minutes it's only done about
> 3000 records.
>
> Is that normal?

No way to know, from what you have told us.

> Is it because it's inside such a large
> transaction?  Is there anything I can do to speed that up?
> It seems awfully slow to me.
>
> I didn't think that giving it more shared buffers would help
> but I tried anyway.  It didn't help.
>
> I tried doing a full vacuum with analyze (vacuumdb -z -f) and it
> cleaned up a lot of stuff, but it didn't speed up the updates at all.
>
> I am using a dual 800MHz Xeon box with 2 GB of RAM.  I've
> tried anywhere from about 16,000 to 65,000 shared buffers.
>
> What other factors are involved here?

What is the structure of the tables?  What sort of indexes are involved?
Is the query using a prepared statement?  Can you perform the entire
update in a single SQL statement?  What is the cardinality of the tables
involved?
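If the update can be expressed as one set-based statement, that alone
often fixes this kind of slowdown.  As a rough sketch (the table and
column names here are invented, since we haven't seen your schema):

  UPDATE orders
     SET status = s.status
    FROM order_staging s
   WHERE orders.order_id = s.order_id;

A single statement like this lets the planner choose a join strategy
once, instead of planning and executing 45,000 separate UPDATEs.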

Provide the exact code you use to perform the update.
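If it is a client-side loop issuing one UPDATE per record, a prepared
statement at least removes the repeated parse/plan overhead.  A sketch,
again with invented names:

  PREPARE upd (integer, text) AS
    UPDATE orders SET status = $2 WHERE order_id = $1;

  EXECUTE upd(1, 'shipped');
  EXECUTE upd(2, 'shipped');
  -- ... one EXECUTE per record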

Provide the SQL schema for every table used in the queries.

Run EXPLAIN ANALYZE on the slow query.  What does it say?
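For example (using the same invented names as above), you can time a
single representative update without keeping its effects:

  BEGIN;
  EXPLAIN ANALYZE
    UPDATE orders SET status = 'shipped' WHERE order_id = 1;
  ROLLBACK;

Note that EXPLAIN ANALYZE actually executes the statement, so the
ROLLBACK discards the change.  A sequential scan in the plan would
point to a missing index on the column in the update's WHERE clause.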
