Re: Increasing GROUP BY CHAR columns speed - Mailing list pgsql-performance

From Scott Marlowe
Subject Re: Increasing GROUP BY CHAR columns speed
Date
Msg-id dcc563d10811281234gd5cdbc7v4b7c708dcf3babd8@mail.gmail.com
In response to Re: Increasing GROUP BY CHAR columns speed  (Scott Carey <scott@richrelevance.com>)
List pgsql-performance
On Fri, Nov 28, 2008 at 12:51 PM, Scott Carey <scott@richrelevance.com> wrote:
> I'm not sure that postgres allocates the whole work_mem each time, and in any event, the OS won't try and allocate
> RAM until the page is actually used.  My experience is that oversized work_mem doesn't hurt until it is actually used.
> Else, my configuration with 1000 concurrent connections and work_mem = 1GB would have blown up.  I don't have that much
> RAM + SWAP * overcommit.  Of the 1000 connections, only 8 run queries that would ever need more than 2 or 3 MB of space
> to execute.  Of course, one has to be certain what the client connections do for it to be very oversized, so I would
> not recommend the above in general.

That's kinda like saying I have a revolver with 100 chambers and only
one bullet, and it seems perfectly safe to put it to my head and keep
pulling the trigger.

Of course pg doesn't allocate the whole amount every time.  It
allocates what it needs, up to the max you allow.  By setting it to 1G,
it's quite possible that eventually enough queries will be running
that need a fair bit of work_mem, and suddenly bang, your server is in
a swap storm and goes non-responsive.
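(For scale: 1000 connections each allowed 1 GB is a theoretical
ceiling of about 1 TB.  And work_mem is a per-sort / per-hash limit,
not per-connection, so a single complex query can consume it several
times over.)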

It's far better to set it to something reasonable, like 4 or 8 MB,
and then, for the individual queries that need more, raise it at run
time.
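
Something like this (the table and column names are just placeholders
for illustration):

  -- server-wide default stays modest, e.g. work_mem = '8MB'
  BEGIN;
  SET LOCAL work_mem = '256MB';  -- applies only inside this transaction
  SELECT c, count(*) FROM big_table GROUP BY c;
  COMMIT;                        -- work_mem reverts automatically

SET LOCAL keeps the bump from leaking into later queries on the same
connection, which matters if you're behind a connection pooler.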
