On Tue, Jul 17, 2007 at 04:10:30PM +0200, Marc Mamin wrote:
> shared_buffers= 262143
You should at least try some runs with this set far, far larger. At
least 10% of memory, but it'd be nice to see what happens with this set
to 50% or higher as well (though don't set it larger than the database
since it'd be a waste).
How big is the database, anyway?
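For reference, shared_buffers here is counted in 8kB pages, so 262143 works out to roughly 2GB. A sketch of what the larger runs could look like, assuming (purely for illustration) a dedicated server with 32GB of RAM:

```
# shared_buffers is in 8kB pages in this config; 262143 pages ~= 2GB.
# Illustrative values for a hypothetical 32GB dedicated server:
#shared_buffers = 419430        # ~3.2GB, about 10% of RAM
shared_buffers = 2097152        # ~16GB, about 50% of RAM
```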
> # on our current best production server with 4GB RAM (not dedicated to
> Postgres), work_mem is set to 600 MB
> # this limitation is probably the bottleneck for our application as the
> files in pgsql_tmp grows up to 15 GB
> # during large aggregations (we have a locking mechanism to avoid
> parallel processing of such transactions)
Keep in mind that a good filesystem will be caching most of pgsql_tmp if
it can.
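Also note that work_mem can be changed per session, so rather than a 600MB global default you could keep the default modest and raise it only around the big aggregations. A sketch (the 1GB value is illustrative, not a recommendation):

```
-- Keep the global work_mem modest; raise it only inside the
-- serialized aggregation transaction (value is illustrative):
BEGIN;
SET LOCAL work_mem = '1GB';
-- ... run the large aggregation here ...
COMMIT;
```

SET LOCAL reverts automatically at commit or rollback, so other sessions never see the inflated value.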
> max_prepared_transaction = 100
Are you using two-phase commit (2PC)? If not, there's no reason to touch
this; you could just set it to 0.
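If PREPARE TRANSACTION really is unused, a minimal sketch:

```
# With no two-phase commit in use, the slots only waste shared memory:
max_prepared_transactions = 0
```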
> # I use the default for the bgwriter as I couldnt find recommendation on
> those
>
> #bgwriter_delay = 200ms # 10-10000ms between rounds
> #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round
> #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round
> #bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round
> #bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round
You'll probably want to increase both maxpages parameters substantially,
assuming that you've got good IO hardware.
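As a starting point only (these numbers are illustrative, not tuned for your hardware):

```
# Illustrative only -- verify against what your IO subsystem can absorb:
bgwriter_lru_maxpages = 100    # default is 5
bgwriter_all_maxpages = 100    # default is 5
```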
> #CHECKPOINT
>
> # xlog will be on a separate disk
> checkpoint_segments=256
>
> checkpoint_timeout = 5min
The further apart your checkpoints, the better. Might want to look at 10
minutes. I'd also set checkpoint_warning to just a bit below
checkpoint_timeout and watch for warnings to make sure you're not
checkpointing a lot more frequently than you're expecting.
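Something along these lines (the checkpoint_warning value is illustrative; the idea is just to keep it a bit under checkpoint_timeout):

```
checkpoint_segments = 256      # xlog on its own disk, as planned
checkpoint_timeout = 10min
checkpoint_warning = 9min      # log a warning if checkpoints come faster
```

If the warnings show up regularly, checkpoints are being driven by checkpoint_segments filling up rather than by the timeout, which is worth knowing before you tune further.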
--
Jim Nasby decibel@decibel.org
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)