On Mon, Jun 14, 2010 at 1:36 AM, AI Rumman <rummandba@gmail.com> wrote:
> I have a server with Dual-Core 4 cpu and 32 GB RAM.
> This is the database server. Only Postgresql 8.1 is running on it with
> multiple databases.
>
> How should I plan for shared_buffers and effective cache size?
> Any idea please.
Well, that really depends on what you're doing with this machine. If
you're maintaining a small, high-turnover db (say < 1 GB), then
setting shared_buffers really high won't help at all, and may hurt
performance. Your OS matters too. Most observed behaviour on Windows
is that a large shared_buffers doesn't help and may in fact hurt
performance, no matter how big your dataset is.
The general rule of thumb I use is to estimate how much data is
being operated on at any one time, in memory, and make sure
shared_buffers is bigger than that, if you can. I.e. if we had say
1 GB of data being worked on at once, then I'd want 2 GB to 4 GB of
shared_buffers so that working set always fits in RAM and
interleaved accesses won't bump each other out of shared_buffers.
Note that this is on a database much bigger than 1 GB itself; it's
just that all the various live connections into it at any one time
might be operating on about 1 GB at once.
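On a box like yours (32 GB RAM, multiple databases) a starting point
along those lines might look something like the snippet below. These
are just illustrative numbers, not tuned for your workload, and keep
in mind that on 8.1 both settings are given in 8 kB pages (the
MB/GB style units only show up in 8.2):

  # postgresql.conf (8.1) -- illustrative starting point only
  shared_buffers = 262144         # 262144 x 8 kB = 2 GB, sized to the hot working set
  effective_cache_size = 3145728  # 3145728 x 8 kB = 24 GB, rough guess at OS cache + buffers

Then watch what the box actually does and adjust from there.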
Note that on another server that handles sessions, shared_buffers is
something like 512 MB. No fsync, since the data is completely
replaceable at any time with a config change / auto failover. Those
machines need updates to be fast, every time, and they get a lot of
updates to the same records all the time, but the records are all
small, in the 1 KB to 2 KB range. Having a large shared_buffers
there just makes those machines slower: with fsync off, the major
limit on commit speed was how much of shared_buffers the background
writer had to keep flushing.
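Just to make that concrete, that kind of session box looks roughly
like this in postgresql.conf (a sketch of the sort of settings I
mean, not the exact config off those machines, and again in 8.1's
8 kB page units):

  shared_buffers = 65536   # 65536 x 8 kB = 512 MB, deliberately small
  fsync = off              # only safe because the data is expendable
                           # and gets rebuilt via failover / config change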
So, what are you doing with your servers?