From: Greg Smith
Subject: Re: [PATCHES] Automatic adjustment of bgwriter_lru_maxpages
Msg-id: Pine.GSO.4.64.0705152229580.6410@westnet.com
List: pgsql-performance

On Tue, 15 May 2007, Jim C. Nasby wrote:

> Moving to -performance.

No, really, moved to performance now.

> On Mon, May 14, 2007 at 09:55:16PM -0700, daveg wrote:
>> What is the current thinking on bg_writer settings for systems such as
>> 4 core Opteron with 16GB or 32GB of memory and heavy batch workloads?

First off, the primary purpose of both background writers is to keep the
individual client backends from stalling while they wait for disk I/O.  If
you're running a batch workload, and there isn't a specific person waiting
for a response, the background writers aren't as critical to worry about.

As Jim already said, tuning the background writer well really requires a
look at the usage profile of your buffer pool and some thinking about your
I/O capacity just as much as it does your CPU/memory situation.

For the first part, I submitted a patch that updates the
contrib/pg_buffercache module to show the usage count information of your
buffer cache.  The LRU writer only writes things with a usage_count of 0,
so taking regular snapshots of that data will give you an idea whether the
LRU writer can do useful work for you or whether you'd be better off
making the all scan more aggressive.  It's a simple patch that only
affects a contrib module you can add and remove easily; I would
characterize it as pretty safe to apply even to a production system, as
long as you're doing the initial tests off-hours.  The patch is at

http://archives.postgresql.org/pgsql-patches/2007-03/msg00555.php

And the usual summary query I run after installing it in a database is:

select usagecount, count(*), isdirty
from pg_buffercache
group by isdirty, usagecount
order by isdirty, usagecount;
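
If all you want is a quick count of what the LRU writer currently has to
work with (dirty buffers at usage count 0), something like this against
the patched view should do it--the lru_candidates label is just a made-up
name for illustration:

select count(*) as lru_candidates
from pg_buffercache
where isdirty and usagecount = 0;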

As for the I/O side of things, I'd suggest you compute a worst-case
scenario for how many disk writes will happen if every buffer the
background writer comes across is dirty, and base your settings on what
you're comfortable with there.  Say you kept the default interval of 200ms
but increased the maximum pages value to 1000; each writer could
theoretically push 1000 x 8KB x 5 passes/second = 40MB/s worth of data to
disk.  Since these are database writes that have to be interleaved with
reads, the sustainable rate here is not as high as you might think.  You
might get a useful performance boost just by pushing the max numbers from
the defaults up into the couple-hundred range--with the amount of RAM you
probably have dedicated to the buffer cache, even the default small
percentages will cover a lot of ground and might need to be increased.  I
like 250 as a round number because it makes for at most an even 10MB a
second flow out per writer.  I wouldn't go too high on the max writes per
pass unless you're in a position to run some good tests to confirm you're
not actually making things worse.
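
To put numbers on that, here's a sketch of what the relevant
postgresql.conf lines might look like on an 8.2-era system--treat the
values as the worst-case arithmetic above rather than recommendations, and
double-check the parameter names against your version:

# bgwriter_delay controls passes/second: 1000ms / 200ms = 5 passes
bgwriter_delay = 200ms
# 250 pages x 8KB x 5 passes/second = 10MB/s worst case for the LRU scan
bgwriter_lru_maxpages = 250
# same cap applied to the all-buffers scan
bgwriter_all_maxpages = 250

These can be picked up with a reload (e.g. pg_ctl reload); no restart
needed.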

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
