On Dec 4, 2006, at 12:10 PM, Mark Lonsdale wrote:
- 4 physical CPUs (hyperthreaded to 8)
I'd tend to disable hyperthreading on Xeons...
shared_buffers = 50,000 - From what I'd read, increasing this number any higher won't have any advantages?
If you can, increase it until your performance no longer improves. I run with about 70k on a server with 8GB of RAM.
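For scale (assuming the stock 8 kB block size), 50,000 buffers is about 400MB and 70k is roughly 550MB; you can confirm both figures from psql:

  SHOW block_size;      -- normally 8192 bytes per buffer page
  SHOW shared_buffers;  -- 50000 pages * 8 kB = ~400MB; 70000 = ~550MB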
effective_cache_size = 524288 - My logic was that I'd give the DB 16GB of the 32, and I based this number on 25% of that; does that sound okay?
This number is advisory to Pg. It doesn't allocate resources; rather, it tells Pg how much disk cache your OS will provide.
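To put the arithmetic in one place (again assuming 8 kB pages): 524288 pages is exactly 4GB. Since the setting is only a hint about OS cache, if you actually expect the OS to cache something closer to the 16GB you're leaving it, the equivalent figure would be around 2 million pages. A quick check from psql:

  SHOW effective_cache_size;  -- in 8 kB pages: 524288 * 8 kB = 4GB
                              -- 16GB of OS cache would be 2097152 pages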
work_mem = 32768 - I only have up to 30 connections in parallel, and more likely less than ½ that number. My SQL is relatively simple, so I figured even if there were 5 sorts per query and 30 queries in parallel, 32768 would use up about 4GB of memory. Does this number sound too high?
You need to evaluate how much memory your queries actually need and then decide whether increasing this will help. Benchmarking your own usage patterns is the only way to do this.
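For what it's worth, 32768 here is in kB (32MB), so the worst case of 30 connections doing 5 sorts apiece is closer to 4.7GB than 4GB. One rough way to benchmark is to set work_mem per session and time a representative query at a couple of values, to see where the gains level off; the table and query below are only placeholders for your own:

  SET work_mem = 8192;          -- 8MB (setting is in kB)
  EXPLAIN ANALYZE
    SELECT customer_id, count(*)
      FROM orders
     GROUP BY customer_id
     ORDER BY count(*) DESC;

  SET work_mem = 32768;         -- 32MB, the proposed value
  EXPLAIN ANALYZE
    SELECT customer_id, count(*)
      FROM orders
     GROUP BY customer_id
     ORDER BY count(*) DESC;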
maintenance_work_mem = 1048576 - Figured I'd allocate 1GB for this.
I usually do this, too.
max_fsm_relations = 2000 - I have about 200 tables plus maybe 4 or 5 indexes on each, and didn't want to have to worry about this number in the future, so I doubled it.
I usually never need to go higher than the default.
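If you'd rather count than estimate, tables, indexes, and toast tables can all take an entry in the free space map, so something like this gives the real number of relations involved:

  -- r = ordinary table, i = index, t = toast table
  SELECT relkind, count(*)
    FROM pg_class
   WHERE relkind IN ('r', 'i', 't')
   GROUP BY relkind;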
max_fsm_pages = 200,000 - I based this on some statistics about the number of pages freed by a vacuum on the older server. Not sure if it's fair to calculate this based on vacuum stats from a 7.3.4 server?
On my big DB server, this sits at 1.2 million pages. You have to check the output of VACUUM VERBOSE from time to time to make sure it isn't getting out of bounds; if it is, you need to either vacuum more often, pack your tables, or increase this parameter.
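Concretely, a database-wide run from psql is enough; the summary near the end of the output reports how many page slots are in use and how many would be needed, against the configured max_fsm_pages / max_fsm_relations limits (the exact wording varies a little between versions):

  -- run in each database, preferably as a superuser so no tables are skipped;
  -- expect it to take a while on a big database
  VACUUM VERBOSE;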
Do these numbers look reasonable given the machine above? Are there any other settings I should be paying particular attention to?
They're a good starting point.