Re: shared_buffers advice - Mailing list pgsql-performance

From Greg Smith
Subject Re: shared_buffers advice
Date
Msg-id 4B98AC5F.3030507@2ndquadrant.com
In response to shared_buffers advice  (Paul McGarry <paul.mcgarry@gmail.com>)
Responses Re: shared_buffers advice
List pgsql-performance
Paul McGarry wrote:
> IE when Postgres reads something from disk it will go into both the OS
> page cache and the Postgresql shared_buffers and the OS page cache
> copy is unlikely to be useful for anything.
>

That's correct.  However, what should happen over time is that the
popular blocks in PostgreSQL's buffer cache, the hot ones that are used
all the time for things like index blocks, will stay in the PG buffer
cache while being evicted from the OS cache.  Now you've got a win over
the situation where you'd have used a smaller buffer cache.  A typical
OS buffering scheme isn't quite smart enough to prioritize those blocks
over the rest so that they're likely to stay there.

So for any given system, the question is whether the gain in performance
from buffers that get a high usage count and stay resident, something
you only get from the PG buffer cache, outweighs the overhead of the
double-buffering you go through in order to reach that state.  If you
oversize the buffer cache and look inside it with pg_buffercache,
considering the usage count distribution, you can actually estimate how
likely that is to be true.
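As a sketch of that inspection (assuming the stock pg_buffercache
extension is installed; the view and column names below come from that
extension, everything else is illustrative):

```sql
-- Requires: CREATE EXTENSION pg_buffercache; (typically as superuser)
-- Summarize how shared_buffers is distributed across usage counts.
-- A large fraction of buffers at high usage counts (4-5) suggests the
-- PG cache is retaining hot blocks a plain OS cache would have evicted.
SELECT usagecount,
       count(*) AS buffers,
       round(100.0 * count(*) / sum(count(*)) OVER (), 1) AS pct
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;
```

If most buffers sit at usage count 0-1 even with a large setting, the
extra shared_buffers space probably isn't buying much over the OS cache.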


> If that is the case what are the downsides to having less overlap
> between the caches, IE heavily favouring one or the other, such as
> allocating shared_buffers to a much larger percentage (such as 90-95%
> of expected 'free' memory).
>

Giving all the memory to the database doesn't work, for several reasons:
-You need a chunk left over for clients to use (i.e. work_mem)
-There won't be enough OS cache for the non-buffer data the database
expects cached reads and writes to perform well on (some of the
non-database files it uses)
-Database checkpoints will turn into a nightmare, because there will be
so much more dirty data that could have been spooled out regularly to
the OS, and then to disk, by backends but never was
-Not having enough room for buffering backend writes means fewer chances
to do write combining and elevator seek sorting, which means average I/O
performance will drop

The alternate idea is to make shared_buffers small.  I see people
happily running in the 128MB - 256MB range sometimes.  The benefit
over just using the default of <32MB is obvious, but you're already past
a good bit of the diminishing marginal returns just from that 8X
increase.

Improvements keep coming as the shared_buffers cache size increases for
many workloads, but eventually you can expect to go too far if you try
to push everything in there.  The only question is whether that happens
at 40%, 60%, or something higher.
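As a hedged starting point (the specific numbers here are illustrative,
not from this thread): on a dedicated server a common rule of thumb is
to start shared_buffers around 25% of RAM and then tune upward or
downward based on what pg_buffercache shows, rather than jumping
straight to those higher percentages.

```
# postgresql.conf -- illustrative values for a hypothetical dedicated
# server with 16GB of RAM; tune against your own workload
shared_buffers = 4GB                 # ~25% of RAM, a conservative start
checkpoint_completion_target = 0.9   # spread checkpoint writes out to
                                     # soften the dirty-data spike that
                                     # comes with a larger buffer cache
```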

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us

