Re: Massive table (500M rows) update nightmare - Mailing list pgsql-performance

From Carlo Stonebanks
Subject Re: Massive table (500M rows) update nightmare
Date
Msg-id hi6ifm$1bi8$1@news.hub.org
In response to Re: Massive table (500M rows) update nightmare  (Scott Marlowe <scott.marlowe@gmail.com>)
List pgsql-performance
> It might well be checkpoints.  Have you tried cranking up checkpoint
> segments to something like 100 or more and seeing how it behaves then?

No I haven't, although it certainly makes sense - watching the process run,
you get this sense that the system occasionally pauses to take a deep, long
breath before returning to work frantically ;D

checkpoint_segments is currently set to 64. The DB is large and is in a
constant state of receiving single-row updates as multiple ETL and
refinement processes run continuously.

Would you expect going to 100 or more to make an appreciable difference, or
should I be more aggressive?
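
For reference, the knobs in play as I understand them - a minimal sketch
only, assuming the 8.x-style settings on this box, with values that are
illustrative rather than a recommendation:

    # postgresql.conf -- checkpoint tuning (illustrative values)
    checkpoint_segments = 128           # allow more WAL between forced checkpoints
    checkpoint_completion_target = 0.9  # spread checkpoint I/O across the interval
    log_checkpoints = on                # log each checkpoint to correlate with pauses

With log_checkpoints on, the "deep breath" moments should line up with
checkpoint entries in the log. The bgwriter stats can also show whether
checkpoints are being forced by segment exhaustion rather than firing on
the timeout:

    -- A high checkpoints_req relative to checkpoints_timed suggests
    -- checkpoint_segments is too low for the write load.
    SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;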

