On 2013-09-13 11:27:03 -0500, Merlin Moncure wrote:
> On Fri, Sep 13, 2013 at 11:07 AM, Andres Freund <andres@2ndquadrant.com> wrote:
> > On 2013-09-13 10:50:06 -0500, Merlin Moncure wrote:
> >> The stock documentation advice probably needs to be revised so that
> >> it's the lesser of 2GB and 25%.
> >
> > I think that would be a pretty bad idea. There are lots of workloads
> > where people have postgres happily chugging along with s_b lots bigger
> > than that and see benefits.
> > We have a couple people reporting mostly undiagnosed (because that turns
> > out to be hard!) problems that seem to be avoided with smaller s_b. We
> > don't even remotely know enough about the problem to make such general
> > recommendations.
> I happen to be one of those "couple" people. Load goes from 0.1 to
> 500 without warning then back to 0.1 equally without warning.
> Unfortunately the server is in a different jurisdiction such that it
> makes deep forensic analysis impossible. I think this is happening
> more and more often as postgres is becoming increasingly deployed on
> high(er)-end servers. I've personally (alone) dealt with 4-5
> confirmed cases and there have been many more. We have a problem.
Absolutely not claiming the contrary. I think it sucks that we couldn't
fully figure out what's happening in detail. I'd love to get my hands on
a setup where it can be reliably reproduced.
> But, to address your point, the "big s_b" benefits are equally hard to
> quantify (unless your database happens to fit in s_b)
Databases where the hot dataset fits in s_b are a pretty honking big use
case, though. That's one of the primary reasons to buy machines with
craploads of memory.
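
As a rough sketch (assuming the pg_buffercache contrib module is
installed and the default 8kB block size), something like the following
gives an idea of how much of the hot data actually sits in s_b:

  CREATE EXTENSION IF NOT EXISTS pg_buffercache;

  -- top relations by buffers currently held in shared_buffers, and the
  -- fraction of s_b each one occupies
  SELECT c.relname,
         pg_size_pretty(count(*) * 8192) AS buffered,
         round(100.0 * count(*) /
               (SELECT setting::bigint FROM pg_settings
                 WHERE name = 'shared_buffers'), 1) AS pct_of_s_b
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
   WHERE b.reldatabase = (SELECT oid FROM pg_database
                           WHERE datname = current_database())
   GROUP BY c.relname
   ORDER BY count(*) DESC
   LIMIT 10;

If the top relations together stay comfortably below shared_buffers, a
big s_b is doing exactly what it was bought for.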
That said, I think having a note in the docs that large s_b can cause
such a problem might not be a bad idea and I surely wouldn't argue
against it.
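
Just to illustrate what the numbers in such a note might look like (the
64GB figure below is only an example, not something from this thread):
under the "lesser of 2GB and 25% of RAM" rule quoted upthread a 64GB box
stays at shared_buffers = 2GB, while plain "25% of RAM" would suggest
16GB:

  -- hypothetical sizing arithmetic for a 64GB machine (example figure only)
  SELECT pg_size_pretty(least(2 * 1024^3, 0.25 * 64 * 1024^3)::bigint) AS capped_rule,
         pg_size_pretty((0.25 * 64 * 1024^3)::bigint)                  AS plain_25pct;
  --  capped_rule | plain_25pct
  -- -------------+-------------
  --  2048 MB     | 16 GB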
Greetings,
Andres Freund
--
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services