Thread: pgtune and massive shared_buffers recommendation

pgtune and massive shared_buffers recommendation

From: Stuart Bishop
Date: Wed, 21 May 2014 21:39:05 +0700
Hi.

I've got some boxes with 128GB of RAM and up to 750 connections, just
upgraded to 9.3 so I'm revising my tuning. I'm getting a
recommendation from pgtune to bump my shared_buffers up to 30GB and
work_mem to 80MB. Is a shared_buffers this high now sane?

The PostgreSQL reference documentation doesn't make recommendations on
limits, but it didn't with earlier versions either, back when more than
a few GB was normally a bad thing to do. The most recent blog posts I
see mentioning 9.3 and modern RAM sizes still seem to cap it at 8GB.

(and yes, I am using pgbouncer but stuck in session mode and up to 750
connections for the time being)
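
For concreteness, the values in question would look something like this
in postgresql.conf (just the numbers above, not a recommendation):

    # pgtune suggestion under discussion, 128GB box
    max_connections = 750
    shared_buffers = 30GB
    work_mem = 80MB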

--
Stuart Bishop <stuart@stuartbishop.net>
http://www.stuartbishop.net/


Re: pgtune and massive shared_buffers recommendation

From: Bill Moran
Date:
On Wed, 21 May 2014 21:39:05 +0700 Stuart Bishop <stuart@stuartbishop.net> wrote:
>
> I've got some boxes with 128GB of RAM and up to 750 connections, just
> upgraded to 9.3 so I'm revising my tuning. I'm getting a
> recommendation from pgtune to bump my shared_buffers up to 30GB and
> work_mem to 80MB. Is a shared_buffers this high now sane?
>
> The PostgreSQL reference documentation doesn't make recommendations on
> limits, but it didn't with earlier versions either, back when more than
> a few GB was normally a bad thing to do. The most recent blog posts I
> see mentioning 9.3 and modern RAM sizes still seem to cap it at 8GB.
>
> (and yes, I am using pgbouncer but stuck in session mode and up to 750
> connections for the time being)

My experience with a busy database server over the last year or so
demonstrated that values much _higher_ than that result in occasional
stalls on the part of PostgreSQL.  My guess is that the code that manages
shared_buffers doesn't scale effectively to 64G (which is where we saw
the problem) and would occasionally stall waiting for some part of the
code to rearrange some memory, or write it to disk, or something else.
Other tuning attempts (such as tweaking various checkpoint settings) did
not alleviate the problem, but the problem completely disappeared when
we lowered shared_buffers to (I think) 32G.

Unfortunately, I don't have access to exact details because I no longer
work at that job, so I'm just pulling from memory.

We never did get an opportunity to test whether there was any performance
change from 64G -> 32G.  I can tell you that if performance decreased, it
didn't decrease enough to be noticeable from the application.

So my advice is that 30G might be just fine for shared_buffers, but if you
experience stalls (i.e., the database stops responding for an uncomfortably
long time) keep that in mind and lower it to see if it fixes the stalls.
Another important data point when considering this: we never experienced
any crashes or errors with shared_buffers set at 64G ... just the stalls, so
setting it too high appears to endanger performance, but nothing else.
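
If you do see them, one rough thing to watch (a diagnostic sketch, not
something we did at the time) is pg_stat_bgwriter: if buffers_backend
grows quickly relative to buffers_checkpoint and buffers_clean, regular
backends are writing dirty buffers themselves, which is the kind of
behaviour that can show up as stalls:

    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_clean,
           buffers_backend, buffers_backend_fsync,
           maxwritten_clean
    FROM pg_stat_bgwriter;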

A bit of advice coming from the other direction: shared_buffers doesn't really
need to be any larger than the working set of your data.  If you can estimate
that, and (for example) it's only 4G, you don't need to set shared_buffers
nearly that high, even if you have 4T of total data.  Of course, estimating
your working set can be difficult, but it's worth a look.
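
If the pg_buffercache contrib module is available, one crude way to get
a feel for it (a sketch only, assuming the default 8kB block size) is to
see how much of shared_buffers is actually being re-used:

    -- one-time setup per database; pg_buffercache ships in contrib
    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- buffers grouped by usage count; pages with a high usagecount are
    -- the ones being re-used, which roughly approximates the hot working set
    SELECT usagecount,
           count(*) AS buffers,
           pg_size_pretty(count(*) * 8192) AS approx_size
    FROM pg_buffercache
    GROUP BY usagecount
    ORDER BY usagecount;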

--
Bill Moran <wmoran@potentialtech.com>


Re: pgtune and massive shared_buffers recommendation

From: Shaun Thomas
Date:
On 05/21/2014 09:39 AM, Stuart Bishop wrote:

> I've got some boxes with 128GB of RAM and up to 750 connections, just
> upgraded to 9.3 so I'm revising my tuning. I'm getting a
> recommendation from pgtune to bump my shared_buffers up to 30GB and
> work_mem to 80MB. Is a shared_buffers this high now sane?

The pgtune utility is from Greg Smith, so it can generally be considered
sane. However, it's based very much on "rule of thumb" calculations that
aren't really valid for recent hardware. It was designed back in the
days when a 32GB machine was "big". It has been touched since, but
you'll notice it hasn't seen a significant update since PostgreSQL 9.1.

It basically boils down to this:

1. The previous rule of "8GB is the max" reflected an actual limitation
in older releases.
2. With that limitation removed, people found greater values didn't
necessarily increase performance.
3. Larger values can be dangerous due to increased demand on
checkpoints, and more potential for dirty memory to cause write IO
storms (see the checkpoint sketch after this list).
4. The OS is usually much better at managing cached pages.
5. PG cannot currently protect itself from double-buffering, so every
block in shared_buffers may *also* be present in the OS page cache.
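
On point 3, the usual mitigation if you do run a larger shared_buffers
is to spread checkpoint writes out over time; something along these
lines in postgresql.conf (illustrative values only, for a 9.3-era server
that still uses checkpoint_segments):

    checkpoint_timeout = 10min            # default is 5min
    checkpoint_segments = 64              # default is 3 on 9.3
    checkpoint_completion_target = 0.9    # default 0.5; spread the writes out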

Given all of that, I couldn't personally recommend anything over 8GB
except in the edgiest of edge cases. The only real way to tell for sure
is with simulated and real-world tests. I'd start at 4GB and try adding
1GB at a time until benchmark results stop improving for your working
data size (a rough test loop is sketched below). I suspect your results
will top out before you even hit 8GB anyway.
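
A rough harness for that kind of incremental test might look like this
(a sketch only: the data directory, database name, client counts and run
time are placeholders, each size change needs a restart, and it assumes
the pgbench tables were already created with pgbench -i):

    for SB in 4GB 5GB 6GB 7GB 8GB; do
        pg_ctl -D /path/to/data restart -w -o "-c shared_buffers=$SB"
        pgbench -c 32 -j 8 -T 300 yourdb > results_$SB.txt
    done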

--
Shaun Thomas
OptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
sthomas@optionshouse.com
