Re: exceptionally large UPDATE - Mailing list pgsql-general

From Vick Khera
Subject Re: exceptionally large UPDATE
Date
Msg-id AANLkTikQRvni6ZmeHFJ7KBi-5aYsHYk8N-c1BZ-tJVZ7@mail.gmail.com
In response to Re: exceptionally large UPDATE  (Ivan Sergio Borgonovo <mail@webthatworks.it>)
Responses Re: exceptionally large UPDATE  (Ivan Sergio Borgonovo <mail@webthatworks.it>)
List pgsql-general
On Thu, Oct 28, 2010 at 1:06 PM, Ivan Sergio Borgonovo
<mail@webthatworks.it> wrote:
> What I'm planning to do is:
> max_connections = 5
> shared_buffers = 240M
> work_mem = 90MB
> maintenance_work_mem = 1GB
> max_fsm_pages = 437616
> max_fsm_relations = 1200
> checkpoint_segments = 70
> default_statistics_target = 30
> #log_min_duration_statement = 1000
>

default_statistics_target = 100 is the default in newer Postgres
releases, and with good reason... try that.

if you boost your checkpoint_segments, also raise checkpoint_timeout
and set checkpoint_completion_target (something like 0.8 would be
good, depending on how fast your disks are) to smooth out your I/O
(i.e., keep it from bursting at checkpoint time).  Is 5 connections
really enough for you?

And like I said before, you can set the work_mem and/or
maintenance_work_mem on a per-connection basis as needed, so for your
big update you can increase those values just during that work without
affecting the rest of the system.
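A sketch of that per-connection override, using SET before the big
statement (the table and values are hypothetical, just to show the
shape of it):

```sql
-- Raise memory limits for this session only; other connections
-- keep the server-wide defaults from postgresql.conf.
SET work_mem = '512MB';
SET maintenance_work_mem = '1GB';

-- hypothetical big update
UPDATE orders SET status = 'archived'
 WHERE created < now() - interval '1 year';

-- return to the configured defaults for the rest of the session
RESET work_mem;
RESET maintenance_work_mem;
```

Inside a transaction you can use SET LOCAL instead, which reverts
automatically at commit or rollback.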
