Re: Hash aggregates blowing out memory - Mailing list pgsql-general

From: Mike Harding
Subject: Re: Hash aggregates blowing out memory
Date:
Msg-id: 1109369054.86993.17.camel@bsd.mvh
In response to: Re: Hash aggregates blowing out memory (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Hash aggregates blowing out memory (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
Any way to adjust n_distinct to be more accurate?

I don't think a 'disk spill' would be that bad if you could re-sort the
hash in place.  If nothing else, if it could -fail- when memory use
climbs into the stratosphere, and re-start, that would be faster than
getting no result at all... sort of an auto-disable of the hashagg.
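
(A minimal sketch of what I mean by nudging the estimate, using a
hypothetical table "foo" and grouping column "bar": raising the
per-column statistics target makes ANALYZE sample more rows, which
usually tightens its n_distinct figure.)

    -- hypothetical names; the commands themselves are stock PostgreSQL
    ALTER TABLE foo ALTER COLUMN bar SET STATISTICS 1000;
    ANALYZE foo;

    -- check what the planner now believes about the column
    SELECT attname, n_distinct
      FROM pg_stats
     WHERE tablename = 'foo' AND attname = 'bar';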

On Fri, 2005-02-25 at 16:55 -0500, Tom Lane wrote:
> Mike Harding <mvh@ix.netcom.com> writes:
> > I've been having problems where a HashAggregate is used because of a bad
> > estimate of the number of distinct elements involved.
>
> If you're desperate, there's always enable_hashagg.  Or reduce sort_mem
> enough so that even the misestimate looks like it will exceed sort_mem.
>
> In the long run it would be nice if HashAgg could spill to disk.  We
> were expecting to see a contribution of code along that line last year
> (from the CMU/Berkeley database class) but it never showed up.  The
> performance implications might be a bit grim anyway :-(
>
>             regards, tom lane
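
(For completeness, a sketch of the session-level workarounds Tom
mentions above; the settings are real, the values are just
placeholders.)

    -- steer the planner away from HashAggregate for this session
    SET enable_hashagg = off;

    -- ...or shrink the memory limit so even the misestimated hash
    -- table looks too big to fit (kB; called work_mem in newer releases)
    SET sort_mem = 1024;

    -- run the problem query, then put things back
    RESET enable_hashagg;
    RESET sort_mem;
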
--
Mike Harding <mvh@ix.netcom.com>

