Thread: Configuration settings for 32GB RAM server
Hi
We are migrating our Postgres 7.3.4 application to Postgres 8.1.5, and also moving it to a server with a much larger hardware configuration. The server will have the following specification:
- 4 physical CPUs (hyperthreaded to 8)
- 32 GB RAM
- x86_64 architecture
- RedHat AS 4
- postgres 8.1.5
I've been taking a look at the various Postgres tuning parameters, and have come up with the following settings.
shared_buffers = 50,000 - From what I'd read, increasing this number higher than this won't have any advantages?
effective_cache_size = 524288 - My logic was that I'd give the DB 16GB of the 32, and I based this number on 25% of that. Sound okay?
work_mem = 32768 - I only have up to 30 connections in parallel, and more likely less than half that number. My SQL is relatively simple, so I figured even if there were 5 sorts per query and 30 queries in parallel, 32768 would use up nearly 5GB of memory. Does this number sound too high?
maintenance_work_mem = 1048576 - Figured I'd allocate 1GB for this.
fsm_relations = 2000 - I have about 200 tables plus maybe 4 or 5 indexes on each, and didn't want to have to worry about this number in the future, so I doubled it.
fsm_pages = 200,000 - Based this on some statistics about the number of pages freed by a vacuum on the older server. Not sure if it's fair to calculate this based on vacuum stats from the 7.3.4 server?
Do these numbers look reasonable given the machine above? Are there any other settings that I should be paying particular attention to?
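Those figures are easy to sanity-check, since in 8.1 shared_buffers and effective_cache_size are counted in 8 KB pages while work_mem is in KB. A quick shell-arithmetic sketch of the numbers above:

```shell
# shared_buffers = 50,000 pages of 8 KB each:
echo "shared_buffers: $((50000 * 8 / 1024)) MB"
# effective_cache_size = 524,288 pages, i.e. 25% of the 16 GB earmarked for the DB:
echo "effective_cache_size: $((524288 * 8 / 1024 / 1024)) GB"
# worst case for work_mem: 30 queries x 5 sorts x 32,768 KB each:
echo "work_mem worst case: $((32768 * 5 * 30 / 1024)) MB"
```

The worst case for work_mem comes out at 4800 MB, i.e. closer to 5 GB than 4.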
Thanks
Mark
On Mon, 2006-12-04 at 12:10 -0500, Mark Lonsdale wrote:
> - 4 physical CPUs (hyperthreaded to 8)
> - 32 GB RAM
> - x86_64 architecture
> - RedHat AS 4
> - postgres 8.1.5
>
> I've been taking a look at the various postgres tuning parameters, and have come up with the following settings.
>
> shared_buffers = 50,000 - From what I'd read, increasing this number higher than this won't have any advantages?

Where did you read that? You should do some tests. Generally 25% of physical memory on a dedicated box is a good point of reference (on 8.1, anyway). I've heard as high as 50% can give you a benefit, but I haven't seen that myself.

> fsm_pages = 200,000 - Based this on some statistics about the number of pages freed by a vacuum on the older server. Not sure if it's fair to calculate this based on vacuum stats of the 7.3.4 server?

Might as well make it a higher number because you have a lot of RAM anyway. It's better than running out of space in the FSM, because to increase that setting you need to restart the daemon. Increasing this by 1 only uses 6 bytes. That means you could set it to 10 times the number you currently have, and it would still be insignificant.

Regards,
Jeff Davis
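Both rules of thumb in that reply can be checked with quick shell arithmetic (the 6-bytes-per-FSM-page figure is taken from the reply above; 8 KB is the default PostgreSQL page size):

```shell
# 25% of a dedicated 32 GB box, expressed in the 8 KB pages that
# shared_buffers uses in 8.1 (32 GB in KB, quartered, divided by 8 KB):
echo "shared_buffers at 25% of 32 GB: $((32 * 1024 * 1024 / 4 / 8)) pages"
# FSM overhead at ~6 bytes per tracked page, so even 2,000,000 pages is tiny:
echo "max_fsm_pages = 2,000,000 costs about $((2000000 * 6 / 1024)) KB of shared memory"
```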
> - 4 physical CPUs (hyperthreaded to 8)
>
> shared_buffers = 50,000 - From what I'd read, increasing this number higher than this won't have any advantages?

If you can, increase it until your performance no longer increases. I run with about 70k on a server with 8GB of RAM.

> effective_cache_size = 524288 - My logic was that I'd give the DB 16GB of the 32, and based this number on 25% of that. Sound okay?
>
> work_mem = 32768 - I only have up to 30 connections in parallel, and more likely less than half that number. My SQL is relatively simple, so I figured even if there were 5 sorts per query and 30 queries in parallel, 32768 would use up nearly 5GB of memory. Does this number sound too high?
>
> maintenance_work_mem = 1048576 - Figured I'd allocate 1GB for this.

I usually do this, too.

> fsm_relations = 2000 - I have about 200 tables plus maybe 4 or 5 indexes on each, and didn't want to have to worry about this number in the future, so I doubled it.
>
> fsm_pages = 200,000 - Based this on some statistics about the number of pages freed by a vacuum on the older server. Not sure if it's fair to calculate this based on vacuum stats of the 7.3.4 server?

On my big DB server, this sits at 1.2 million pages. You have to check the output of VACUUM VERBOSE from time to time to ensure it is not getting out of bounds; if so, you need to either vacuum more often, pack your tables, or increase this parameter.

> Do these numbers look reasonable given the machine above? Any other settings that I should be paying particular attention to?

They're a good starting point.
> maintenance_work_mem = 1048576 - Figured I'd allocate 1GB for this.

Do you know how often and when you will be creating indexes or clustering? We set ours to 2GB because of the performance gains. We've also thought about testing it at 4GB. We can do this because we know that during the middle of the night our server load drops to nearly zero. If you know you have windows like that, then I would definitely suggest increasing your maintenance_work_mem. It has halved the time for I/O-intensive tasks like CLUSTER.
On Mon, 2006-12-04 at 12:57, Joshua Marsh wrote:
> On 12/4/06, Mark Lonsdale <mark.lonsdale@wysdm.com> wrote:
> > maintenance_work_mem = 1048576 - Figured I'd allocate 1GB for this.
>
> Do you know how often and when you will be creating indexes or clustering? We set ours to 2GB because of the performance gains. We've also thought about testing it at 4GB. We can do this because we know during the middle of the night our server load drops to nearly zero. If you know you have windows like that, then I would definitely suggest increasing your maintenance_work_mem. It has halved the time for I/O-intensive tasks like CLUSTER.

Also, remember that most of those settings (work_mem, maintenance_work_mem) can be changed for an individual session. So you can leave work_mem at something conservative, like 8 meg, and for a session that is going to run at 2am and iterate over billions of rows, you can throw several gigabytes at it and not worry about that one setting blowing out all the other processes on the machine.
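That per-session approach can be sketched as a plain SQL fragment; the table and index names below are hypothetical, and in 8.1 both settings are expressed in KB:

```sql
-- Leave the global work_mem conservative in postgresql.conf, then in the
-- 2am batch session only (values in KB; object names are hypothetical):
SET work_mem = 1048576;              -- 1 GB for this session's sorts
SET maintenance_work_mem = 2097152;  -- 2 GB for the index build below
CREATE INDEX nightly_report_idx ON big_table (report_date);
```

A plain SET lasts only until the session disconnects, so other backends keep the conservative defaults.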
Thanks to all for the feedback on this issue. After reviewing your comments, I'm thinking of changing to the following values:
shared_buffers = 786432 - If I've done my math right, then this is 6GB, which is 25% of 24GB (I want to preserve the other 8GB for the OS and app).
effective_cache_size = 2359296 - Equates to 18GB, which is 75% of 24GB, using the feedback from Dave Cramer.
work_mem = 32768
maintenance_work_mem = 1048576, i.e. 1GB
max_fsm_relations = 10,000 - Given the small amount of memory this will use, I figured I'd go large and not worry about it in the future.
max_fsm_pages = 10,000,000 - Again, increasing this significantly to cover my existing vacuuming numbers; given I have a lot of memory, it seems like it's not going to hurt me at all.
Sound good?
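Collected into postgresql.conf form, those revised numbers would read as follows (the comments showing byte equivalents at the 8 KB page size are added here for reference):

```ini
shared_buffers = 786432            # 8 KB pages -> 6 GB, 25% of 24 GB
effective_cache_size = 2359296     # 8 KB pages -> 18 GB, 75% of 24 GB
work_mem = 32768                   # KB -> 32 MB per sort operation
maintenance_work_mem = 1048576     # KB -> 1 GB
max_fsm_relations = 10000
max_fsm_pages = 10000000           # ~6 bytes each -> roughly 57 MB
```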
From: Dave Cramer [mailto:pg@fastcrypt.com]
Sent: 04 December 2006 23:29
To: Mark Lonsdale
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Configuration settings for 32GB RAM server
On 4-Dec-06, at 12:10 PM, Mark Lonsdale wrote:

> Hi
>
> We are migrating our Postgres 7.3.4 application to Postgres 8.1.5, and also moving it to a server with a much larger hardware configuration. The server will have the following specification:
>
> - 4 physical CPUs (hyperthreaded to 8)

Try both hyperthreaded and not; there's been some evidence that HT helps us now.

> - 32 GB RAM
> - x86_64 architecture
> - RedHat AS 4
> - postgres 8.1.5
>
> I've been taking a look at the various Postgres tuning parameters, and have come up with the following settings.
>
> shared_buffers = 50,000 - From what I'd read, increasing this number higher than this won't have any advantages?

This is no longer true; 25% of available memory is a good starting place, and go up from there.

> effective_cache_size = 524288 - My logic was that I'd give the DB 16GB of the 32, and based this number on 25% of that. Sound okay?

This should be around 3/4 of available memory, or 24G.

> work_mem = 32768 - I only have up to 30 connections in parallel, and more likely less than half that number. My SQL is relatively simple, so I figured even if there were 5 sorts per query and 30 queries in parallel, 32768 would use up nearly 5GB of memory. Does this number sound too high?
>
> maintenance_work_mem = 1048576 - Figured I'd allocate 1GB for this.
>
> fsm_relations = 2000 - I have about 200 tables plus maybe 4 or 5 indexes on each, and didn't want to have to worry about this number in the future, so I doubled it.
>
> fsm_pages = 200,000 - Based this on some statistics about the number of pages freed by a vacuum on the older server. Not sure if it's fair to calculate this based on vacuum stats of the 7.3.4 server?

This is dependent on your application.

> Do these numbers look reasonable given the machine above? Any other settings that I should be paying particular attention to?

Autovacuum settings.

> Thanks
>
> Mark
On Mon, Dec 04, 2006 at 09:42:57AM -0800, Jeff Davis wrote:
> > fsm_pages = 200,000 - Based this on some statistics about the number of pages freed by a vacuum on the older server. Not sure if it's fair to calculate this based on vacuum stats of the 7.3.4 server?
>
> Might as well make it a higher number because you have a lot of RAM anyway. It's better than running out of space in the FSM, because to increase that setting you need to restart the daemon. Increasing this by 1 only uses 6 bytes. That means you could set it to 10 times the number you currently have, and it would still be insignificant.

You can also run vacuumdb -av and look at the last few lines to see what it says you need. Or you can get that info out of contrib/pg_freespacemap.

--
Jim Nasby            jim@nasby.net
EnterpriseDB         http://enterprisedb.com   512.569.9461 (cell)