Thread: shared_buffer advice
I have a dual-core, 4-CPU server with 32 GB of RAM.
This is the database server. Only PostgreSQL 8.1 is running on it, with multiple databases.
How should I plan for shared_buffers and effective_cache_size?
Any ideas, please?
On Mon, Jun 14, 2010 at 1:36 AM, AI Rumman <rummandba@gmail.com> wrote:
> I have a dual-core, 4-CPU server with 32 GB of RAM.
> This is the database server. Only PostgreSQL 8.1 is running on it, with multiple databases.
>
> How should I plan for shared_buffers and effective_cache_size?
> Any ideas, please?

Well, that really depends on what you're doing with this machine. If you're maintaining a small, high-turnover db (say < 1 GB), then setting shared_buffers really high won't help at all, and may hurt performance.

Your OS matters too. Most observed behaviour on Windows is that a large shared_buffers doesn't help and may in fact hurt performance, no matter how big your dataset is.

The general rule of thumb I use is to model how much data is being operated on at a time, in memory, and make sure shared_buffers is bigger than that if you can. I.e. if we had, say, 1 GB of data being operated on at once, then I'd want 2 GB to 4 GB of shared_buffers so I'd be sure it always fits in RAM and that interleaved accesses won't bump each other out of the shared_buffers. Note that this is on a database much bigger than 1 GB itself; it's just that all the various live connections into it at any one time might be operating on about 1 GB at once.

Note that on another server that handles sessions, shared_buffers is something like 512 MB. No fsync, because the data is completely replaceable at any time with a config change / auto failover. Those machines need to make updates fast, every time, and they get a lot of updates to the same records all the time, but they're all small records, in the 1 kB to 2 kB range. Having a large shared_buffers there just makes those machines slower, and with fsync off the major limit on commit speed was how much of shared_buffers the background writer had to keep writing out.

So, what are you doing with your servers?
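To make that rule of thumb concrete, here is a minimal postgresql.conf sketch assuming roughly 1 GB of data is hot at any one time on a dedicated box; the 2 GB figure is an illustrative guess following the 2x-4x guideline above, not a value recommended anywhere in this thread:

    # Illustrative only: working set assumed to be ~1 GB at a time
    shared_buffers = 262144     # 8.1 takes a buffer count; 262144 x 8 kB = 2 GB
    #shared_buffers = 2GB       # 8.2 and later also accept memory units

Keep in mind that changing shared_buffers requires a server restart, and on these versions the kernel's System V shared memory limit (shmmax) usually has to be raised before a value this large will even start.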
On Mon, Jun 14, 2010 at 1:36 AM, AI Rumman <rummandba@gmail.com> wrote:
> I have a dual-core, 4-CPU server with 32 GB of RAM.
> This is the database server. Only PostgreSQL 8.1 is running on it, with multiple databases.
>
> How should I plan for shared_buffers and effective_cache_size?
> Any ideas, please?

Note that effective_cache_size is way simpler. Add up the size of the kernel cache and shared_buffers and set it to that if the db is the only thing on the machine; if the machine is shared, then maybe cut it down a bit. It's not a setting that's sensitive to a few gigabytes here or there, and it allocates nothing; it just helps the planner estimate the % chance that something is in cache or not.
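As a sketch of that arithmetic, with hypothetical numbers for a dedicated 32 GB machine (say `free` reports about 28 GB of OS page cache and shared_buffers is 2 GB):

    # Hypothetical numbers: ~28 GB kernel cache + ~2 GB shared_buffers = ~30 GB
    # effective_cache_size allocates nothing; it only informs the planner.
    effective_cache_size = 3932160   # 8.1 counts 8 kB pages; 3932160 x 8 kB = 30 GB
    #effective_cache_size = 30GB     # 8.2 and later also accept memory units

Being off by a few gigabytes in either direction is harmless, as noted above.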
AI Rumman wrote:
> I have a dual-core, 4-CPU server with 32 GB of RAM.
> This is the database server. Only PostgreSQL 8.1 is running on it, with multiple databases.
> How should I plan for shared_buffers and effective_cache_size?

Set shared_buffers very low, at most 128 MB, because you're running PostgreSQL 8.1. If you upgrade to 8.3 or later, which you should, you might explore higher values. On 8.1 and 8.2, trying to use any large size for shared_buffers makes the whole database freeze under any sort of heavy write load when checkpoints happen.

The first question you should be asking is not "how can I tune the parameters on my 8.1 server?"; it should be "how can I upgrade this 8.1 server to a newer version?". That's not always possible, because some 8.1 applications won't run on 8.3 or later without changes. But moving to 8.3 is by far a better way to get more performance from your system than trying to tune 8.1.

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us
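A minimal sketch of the conservative 8.1 configuration described above; the post-upgrade value is an assumption for illustration only, not a figure from this message:

    # On 8.1, stay small to avoid checkpoint freezes under heavy write load
    shared_buffers = 16384      # 8.1 buffer count; 16384 x 8 kB = 128 MB
    # After upgrading to 8.3 or later, larger values become worth testing, e.g.:
    #shared_buffers = 2GB       # assumption for illustration only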