Re: Postgresql.conf cleanup - Mailing list pgsql-hackers

From Greg Smith
Subject Re: Postgresql.conf cleanup
Date
Msg-id Pine.GSO.4.64.0707021418380.11149@westnet.com
In response to Re: Postgresql.conf cleanup  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Postgresql.conf cleanup  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Mon, 2 Jul 2007, Tom Lane wrote:

>> # wal_buffers = 1MB
> Is there really evidence in favor of such a high setting for this,
> either?

I noticed consistent improvements in throughput on pgbench results with 
lots of clients going from the default to 256KB, flatlining above that; it 
seemed sufficiently large for any system I've used.  I've taken to using 
1MB anyway nowadays because others suggested that number, and it seemed to 
be well beyond the useful range and thus never likely to throttle 
anything.  Is there any downside to it being larger than necessary beyond 
what seems like a trivial amount of additional RAM?

>> # checkpoint_segments = 8 to 16 if you have the disk space (0.3 to 0.6 GB)
> This seems definitely too small --- for write-intensive databases I like
> to set it to 30 or so, which should eat about a GB if I did the
> arithmetic right.

You did--I approximate larger values in my head by saying 1GB at 30 
segments and scaling up from there.  But don't forget this is impacted by 
the LDC change, with the number of segments expected to be active now being

(2 + checkpoint_completion_target) * checkpoint_segments + 1

so with a default install, setting the segments to 30 will creep that up 
closer to a 1.2GB footprint.
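A quick sanity check of both versions of that arithmetic, assuming the standard 16MB WAL segment size (the helper function here is just for illustration, not anything in the server):

```python
# Approximate peak pg_xlog footprint, assuming 16 MB WAL segments.
# Pre-LDC bound: 2 * checkpoint_segments + 1 active segments.
# LDC-era bound (quoted above):
#   (2 + checkpoint_completion_target) * checkpoint_segments + 1
SEGMENT_MB = 16  # default WAL segment size

def wal_footprint_mb(checkpoint_segments, checkpoint_completion_target=None):
    """Rough peak WAL footprint in MB; pass a completion target for the
    LDC formula, or None for the older 2n+1 bound."""
    if checkpoint_completion_target is None:
        segments = 2 * checkpoint_segments + 1
    else:
        segments = (2 + checkpoint_completion_target) * checkpoint_segments + 1
    return round(segments * SEGMENT_MB)

print(wal_footprint_mb(30))        # pre-LDC: 61 segments -> 976 MB, "about a GB"
print(wal_footprint_mb(30, 0.5))   # default target 0.5: 76 segments -> 1216 MB, ~1.2GB
```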

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

