Re: Performance large tables. - Mailing list pgsql-general

From: William Yu
Subject: Re: Performance large tables.
Msg-id: dnhgcl$1bhk$1@news.hub.org
In response to: Re: Performance large tables. (Benjamin Arai <barai@cs.ucr.edu>)
List: pgsql-general

Benjamin Arai wrote:
> For the most part the updates are simple one-liners. I currently commit
> in large batches to increase performance, but it still takes a while as
> stated above. From evaluating the computer's performance during an
> update, the system is thrashing both memory and disk. I am currently
> using PostgreSQL 8.0.3.
>
> Example command "UPDATE data where name=x and date=y;".

Before you start throwing the baby out with the bathwater by totally
revamping your DB architecture, try some simple debugging first to see
why these queries take so long. Run EXPLAIN ANALYZE on them, test
vacuuming/analyzing mid-update, and fiddle with postgresql.conf
parameters (the WAL/commit settings especially). Also try committing
with different numbers of statements per transaction -- the optimal
batch size won't be the same across all development tools.
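
As a starting point, something like this -- the SET clause and the
literal values are made up to turn the quoted command into valid SQL,
and the config values are illustrative 8.0-era knobs, not
recommendations:

  EXPLAIN ANALYZE
  UPDATE data SET value = 'new'   -- SET clause/values assumed
   WHERE name = 'x' AND date = '2005-12-01';

If that shows a sequential scan over millions of rows, an index on
(name, date) is the first thing to look at. On the config side, the
write-related postgresql.conf settings worth testing include:

  wal_buffers = 64
  checkpoint_segments = 16
  commit_delay = 10000        # microseconds to wait before WAL flush
  commit_siblings = 5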

My own experience is that periodic vacuuming & analyzing are very much
needed for batches of small UPDATE commands -- every updated row leaves
a dead tuple behind until it is vacuumed. For our batch processing,
autovacuum plus commit batches of 1K-10K statements did the trick in
keeping performance up.
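
A minimal sketch of that pattern (table and column names carried over
from the quoted command; the batch size is whatever your own testing
shows is best):

  BEGIN;
  UPDATE data SET value = 'a1' WHERE name = 'n1' AND date = '2005-12-01';
  UPDATE data SET value = 'a2' WHERE name = 'n2' AND date = '2005-12-01';
  -- ... on the order of 1,000-10,000 statements per transaction ...
  COMMIT;

  -- Between batches, reclaim the dead tuples the updates leave behind.
  -- (On 8.0, "autovacuum" means the contrib/pg_autovacuum daemon;
  -- it was only integrated into the backend in 8.1.)
  VACUUM ANALYZE data;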
