Re: PostgreSQL 8.0.6 crash - Mailing list pgsql-hackers

From Tom Lane
Subject Re: PostgreSQL 8.0.6 crash
Msg-id 6332.1139532263@sss.pgh.pa.us
In response to Re: PostgreSQL 8.0.6 crash  (Rick Gigger <rick@alpinenetworking.com>)
Responses Re: PostgreSQL 8.0.6 crash
List pgsql-hackers
Rick Gigger <rick@alpinenetworking.com> writes:
> However if hashagg truly does not obey the limit that is supposed to  
> be imposed by work_mem then it really ought to be documented.  Is  
> there a misunderstanding here and it really does obey it?  Or is  
> hashagg an exception but the other work_mem associated operations  
> work fine?  Or is it possible for them all to go out of bounds?

hashagg is the exception.  It should be fixed rather than merely
documented, but no one has gotten around to that yet.

One point to consider is that if the planner's estimate is as far off
as exhibited in the OP's example, a hashagg that did spill to disk
would likely take so long that he'd be back here complaining that
the query never terminates ;-).  In most practical situations, I think
exceeding work_mem is really the best behavior, as long as the overrun
is no more than 10x or 100x.  It's when the estimate is off by many
orders of magnitude that you've got a problem.  Running out of memory
is not necessarily the worst response ... as long as the system doesn't
kill the process when that happens.
        regards, tom lane
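[Editor's note: the reasoning above can be sketched numerically. This is an illustrative model only, not PostgreSQL source; the per-entry size and group counts are assumptions chosen to show how a group-count misestimate scales into a work_mem overrun.]

```python
# Sketch: the planner sizes a hash aggregate from its estimated number of
# groups, but at execution time 8.0's hashagg keeps inserting entries for
# every group it actually sees, so memory use tracks the ACTUAL group count.
# BYTES_PER_ENTRY and the group counts below are illustrative assumptions.

WORK_MEM_KB = 1024            # example setting: work_mem = 1MB
BYTES_PER_ENTRY = 64          # assumed per-group hash-entry overhead

def hashagg_memory_kb(n_groups: int) -> int:
    """Rough memory consumed by an in-memory hash aggregate."""
    return n_groups * BYTES_PER_ENTRY // 1024

planned_groups = 10_000       # planner's estimate: comfortably fits
actual_groups = 10_000_000    # reality: estimate off by 1000x

print(hashagg_memory_kb(planned_groups), "KB planned")
print(hashagg_memory_kb(actual_groups), "KB actual")
```

With a 10x-100x overrun the query still completes, just using more memory than configured; at 1000x and beyond, as above, the process can exhaust memory entirely, which is the failure mode discussed in this thread.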

