On Thu, Mar 22, 2012 at 10:02 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Thu, Mar 22, 2012 at 8:46 AM, Merlin Moncure <mmoncure@gmail.com> wrote:
>> large result sets) or cached structures like plpgsql plans. Once you
>> go over 50% memory into shared, it's pretty easy to overcommit your
>> server and burn yourself. Of course, 50% of a 256GB server is a very
>> different animal than 50% of a 4GB server.
>
> There's other issues you run into with large shared_buffers as well.
> If you've got a large shared_buffers setting, but only regularly hit a
> small subset of your db (say 32GB shared_buffers but only hit 4G or so
> regularly in your app) then it's quite possible that older
> shared_buffer segments will get swapped out because they're not being
> used. Then, when the db goes to hit a page in shared_buffers, the OS
> will have to swap it back in. What was supposed to make your db much
> faster has now made it much slower.
>
> With Linux, the OS tends to swap out unused memory to make room for
> file buffers. While you can change the swappiness settings to 0 to
> slow it down, the OS will eventually swap out the least used segments
> anyway. The only solution on large memory servers is often to just
> turn off swap.

Right -- but my take on that is that hacking the o/s to disable swap
is treating the symptoms of a server misconfiguration rather than the
cause.
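
(For reference, the knobs Scott mentions look like this -- a sketch only,
and the values are illustrative, not recommendations:)

```shell
# Inspect the current swappiness (0-100; lower = swap less eagerly).
cat /proc/sys/vm/swappiness

# Lower it at runtime (root required). Note this only discourages
# swapping of idle pages in favor of file cache; it does not prevent it.
sysctl vm.swappiness=0

# Persist the setting across reboots.
echo 'vm.swappiness = 0' >> /etc/sysctl.conf

# The heavier hammer: disable swap entirely (and remove the swap
# entries from /etc/fstab to make it stick).
swapoff -a
```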
In particular it probably means shared_buffers is set too high...the
o/s thinks it needs that memory more than you do and it may very well
be right. The o/s doesn't swap for fun -- it does so when there are
memory pressures and things are under stress. Generally, unused
memory *should* get swapped out...of course there are exceptions, for
example if you want zero-latency access to an important table that is
only touched once a day. But those cases are pretty rare. On systems
with very fast storage (ssd), removing swap is even more unreasonable
-- the penalty for going to storage is less and the server could use
that memory for other things.
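
(One way to sanity-check whether shared_buffers is oversized for your
working set is the contrib pg_buffercache module -- a rough sketch,
assuming the extension is installed in the target database:)

```shell
# Install once per database (superuser):
#   CREATE EXTENSION pg_buffercache;
#
# Count how many shared buffers actually hold a page vs. the total.
# count(relfilenode) only counts rows where relfilenode is non-null,
# i.e. buffers that are in use. If in_use stays far below total under
# steady load, shared_buffers is likely bigger than the working set.
psql -c "
  SELECT count(relfilenode) AS in_use,
         count(*)           AS total
  FROM   pg_buffercache;
"
```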
merlin