Re: Massive table (500M rows) update nightmare - Mailing list pgsql-performance

From Ludwik Dylag
Subject Re: Massive table (500M rows) update nightmare
Date
Msg-id 2fe468a21001070838n40f1cbb5y19b34c5c4748a62d@mail.gmail.com
In response to Re: Massive table (500M rows) update nightmare  (Leo Mannhart <leo.mannhart@beecom.ch>)
List pgsql-performance
I would suggest:
1. turn off autovacuum
1a. optionally tune the database for better performance for this kind of operation (I can't help with that here)
2. restart database
3. drop all indexes
4. update
5. vacuum full table
6. create indexes
7. turn on autovacuum
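
The steps above could be sketched roughly as follows (table, column, and index names are hypothetical; autovacuum is toggled in postgresql.conf, steps 1, 2, and 7 are server-side actions rather than SQL):

```sql
-- 1./2. set autovacuum = off in postgresql.conf, then restart the server

-- 3. drop the indexes on the big table so the update does not maintain them
DROP INDEX big_table_some_col_idx;

-- 4. run the update in one pass
UPDATE big_table SET some_col = new_value;

-- 5. reclaim the dead tuples left behind by the update
VACUUM FULL big_table;

-- 6. rebuild the index
CREATE INDEX big_table_some_col_idx ON big_table (some_col);

-- 7. set autovacuum = on again and restart/reload
```

Note that VACUUM FULL takes an exclusive lock on the table for its full duration, so this only fits a maintenance window.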

Ludwik


2010/1/7 Leo Mannhart <leo.mannhart@beecom.ch>
Kevin Grittner wrote:
> Leo Mannhart <leo.mannhart@beecom.ch> wrote:
>
>> You could also try to just update the whole table in one go, it is
>> probably faster than you expect.
>
> That would, of course, bloat the table and indexes horribly.  One
> advantage of the incremental approach is that there is a chance for
> autovacuum or scheduled vacuums to make space available for re-use
> by subsequent updates.
>
> -Kevin
>
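
A minimal sketch of the batched approach Kevin describes (the key column, range size, and names are hypothetical): updating one key range per transaction, with a vacuum between batches, lets the space freed by earlier batches be reused by later ones instead of bloating the table.

```sql
-- update one slice, commit, vacuum, then move to the next slice
BEGIN;
UPDATE big_table
   SET some_col = new_value
 WHERE id BETWEEN 1 AND 100000;
COMMIT;

VACUUM big_table;  -- plain vacuum: marks dead tuples reusable, no exclusive lock

-- repeat with id BETWEEN 100001 AND 200000, and so on
```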

ouch...
thanks for correcting this.
... and forgive an old man coming from Oracle ;)

Leo



--
Ludwik Dyląg
