Re: Auto-tuning work_mem and maintenance_work_mem - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: Auto-tuning work_mem and maintenance_work_mem
Date:
Msg-id: CA+TgmoYzKR=GKTNF=YR+sAY=tVfGyu96TLQCrRQiOb7Ri=SM+g@mail.gmail.com
In response to: Re: Auto-tuning work_mem and maintenance_work_mem  (Andres Freund <andres@2ndquadrant.com>)
Responses: Re: Auto-tuning work_mem and maintenance_work_mem  (Andres Freund <andres@2ndquadrant.com>)
List: pgsql-hackers
On Mon, Feb 17, 2014 at 11:19 AM, Andres Freund <andres@2ndquadrant.com> wrote:
> On 2014-02-16 21:26:47 -0500, Robert Haas wrote:
>> I don't think anyone objected to increasing the defaults for work_mem
>> and maintenance_work_mem by 4x, and a number of people were in favor,
>> so I think we should go ahead and do that.  If you'd like to do the
>> honors, by all means!
>
> Actually, I object to increasing work_mem by default. In my experience
> most of the untuned servers are backing some kind of web application and
> often run with far too many connections. Increasing work_mem for those
> is dangerous.

I think you may be out-voted.  An awful lot of people have voiced
support for the idea of raising this value, and there is no rule that
our default should be the smallest value that anyone will ever find
useful.  We do tend to err on the side of conservatism and aim for a
relatively low-end machine, and I agree with that policy, but there is
such a thing as going overboard.  With the proposed defaults, a user
with one sort or hash in every session, each of which uses the
entirety of work_mem, is on the hook for 400MB.  If you're trying to
handle 100 connections on a machine that does not have 400MB of
working memory available, you are probably in for a bad time of it.
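
To spell that arithmetic out, here's a rough sketch assuming the 4x
bump lands at work_mem = 4MB; these are my illustrative numbers, not
committed values:

-- hedged sketch of the proposed per-session settings
SET work_mem = '4MB';               -- proposed default, up from 1MB
SET maintenance_work_mem = '64MB';  -- proposed default, up from 16MB
-- worst case above: 100 sessions, each with one sort or hash that
-- uses all of work_mem:  100 * 4MB = 400MB of working memory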

Now, if you're saying that people raise max_connections to, say, 1000
*and do nothing else*, perhaps that makes the argument more plausible.
But I don't think it makes it very much more plausible.  Even a
high-end system is likely to deliver terrible performance if the user
has 1000 simultaneously-active connections; one with only a few GB of
memory is going to be crushed like a bug.
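
For the sake of illustration, the same worst-case math scales linearly
with the number of active backends (again assuming a 4MB default):

-- how many backends are actually active at the moment?
SELECT count(*) FROM pg_stat_activity WHERE state = 'active';
-- 1000 active sessions, each using all of work_mem:
--   1000 * 4MB = 4GB, before counting shared_buffers or other
--   per-backend overhead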

I'll note that in 9.3, we quadrupled the default size of
shared_buffers when we got out from under the System V shared memory
limits and AFAIK we've had zero complaints about that.  It is entirely
possible, even likely, that there is a machine out there somewhere for
which the old value of 32MB is preferable, and those people can
configure a smaller value.  But that's not typical.  And neither do I
believe that the typical PostgreSQL user wants a 2MB sort to spill to
disk.
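
To make that last point concrete, here's a quick way to watch a small
sort spill; the table and row count are mine, picked so the sort lands
in the neighborhood of 2MB:

-- hypothetical example: a ~2MB sort under the current 1MB default
CREATE TEMP TABLE t AS
  SELECT g AS id, md5(g::text) AS pad FROM generate_series(1, 30000) g;
SET work_mem = '1MB';
EXPLAIN ANALYZE SELECT * FROM t ORDER BY pad;
-- expect something like "Sort Method: external merge  Disk: ...kB"
SET work_mem = '4MB';
EXPLAIN ANALYZE SELECT * FROM t ORDER BY pad;
-- the same sort should now stay in memory
-- ("Sort Method: quicksort  Memory: ...kB")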

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


