Re: 9.5: Memory-bounded HashAgg - Mailing list pgsql-hackers

From Tom Lane
Subject Re: 9.5: Memory-bounded HashAgg
Date
Msg-id 5306.1408032764@sss.pgh.pa.us
In response to Re: 9.5: Memory-bounded HashAgg  (Jeff Davis <pgsql@j-davis.com>)
Responses Re: 9.5: Memory-bounded HashAgg
List pgsql-hackers
Jeff Davis <pgsql@j-davis.com> writes:
> HashJoin only deals with tuples. With HashAgg, you have to deal with a
> mix of tuples and partially-computed aggregate state values. Not
> impossible, but it is a little more awkward than HashJoin.

Not sure that I follow your point.  You're going to have to deal with that
no matter what, no?

I guess in principle you could avoid the need to dump agg state to disk.
What you'd have to do is write out tuples to temp files even when you
think you've processed them entirely, so that if you later realize you
need to split the current batch, you can recompute the states of the
postponed aggregates from scratch (ie from the input tuples) when you get
around to processing the batch they got moved to.  This would avoid
confronting the how-to-dump-agg-state problem, but it seems to have little
else to recommend it.  Even if splitting a batch is a rare occurrence,
the killer objection here is that even a totally in-memory HashAgg would
have to write all its input to a temp file, on the small chance that it
would exceed work_mem and need to switch to batching.
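To make that concrete, here is a toy standalone sketch of what that scheme amounts to (nothing below is actual executor code; the data structures, names, and the GROUP_LIMIT stand-in for work_mem are invented for illustration, and a real implementation would split batches by hash value rather than tracking individual postponed keys):

/*
 * Toy standalone sketch (not executor code; every name is invented) of the
 * "recompute from input tuples" idea: each input tuple is spooled to a temp
 * file even while its group is aggregated in memory, so that groups pushed
 * off to a later batch can be rebuilt from scratch from the spool instead
 * of having their transition state written out.
 */
#include <stdio.h>
#include <stdlib.h>

#define GROUP_LIMIT 4        /* stand-in for work_mem */
#define HASH_SLOTS  16

typedef struct Group { int key; long sum; struct Group *next; } Group;

static Group *slots[HASH_SLOTS];
static int ngroups;

static Group *
find_group(int key)
{
    Group *g;

    for (g = slots[(unsigned) key % HASH_SLOTS]; g; g = g->next)
        if (g->key == key)
            return g;
    return NULL;
}

static Group *
make_group(int key)
{
    unsigned h = (unsigned) key % HASH_SLOTS;
    Group *g = calloc(1, sizeof(Group));

    g->key = key;
    g->next = slots[h];
    slots[h] = g;
    ngroups++;
    return g;
}

int
main(void)
{
    int   input[][2] = {{1, 10}, {2, 20}, {3, 30}, {1, 5},
                        {4, 40}, {5, 50}, {6, 60}, {5, 7}};
    int   ninput = (int) (sizeof(input) / sizeof(input[0]));
    int   postponed[64], npostponed = 0;
    FILE *spool = tmpfile();    /* every input tuple is written here */
    Group *g;
    int   i, j, key, val;

    /* batch 1: aggregate the groups that fit, postpone the keys that don't */
    for (i = 0; i < ninput; i++)
    {
        key = input[i][0];
        val = input[i][1];
        fprintf(spool, "%d %d\n", key, val);  /* spool even finished groups */

        if (find_group(key) == NULL && ngroups >= GROUP_LIMIT)
        {
            /* no room for a new group: remember its key for the next batch */
            for (j = 0; j < npostponed && postponed[j] != key; j++)
                ;
            if (j == npostponed)
                postponed[npostponed++] = key;
            continue;
        }
        if (find_group(key) == NULL)
            make_group(key);
        find_group(key)->sum += val;
    }

    /* batch 2: recompute the postponed groups from scratch off the spool */
    rewind(spool);
    while (fscanf(spool, "%d %d", &key, &val) == 2)
        for (j = 0; j < npostponed; j++)
            if (postponed[j] == key)
            {
                if (find_group(key) == NULL)
                    make_group(key);
                find_group(key)->sum += val;
            }

    for (i = 0; i < HASH_SLOTS; i++)
        for (g = slots[i]; g; g = g->next)
            printf("key=%d sum=%ld\n", g->key, g->sum);

    fclose(spool);
    return 0;
}

Note that the fprintf to the spool file happens for every input row, whether or not the table ever fills up, which is exactly the objection above.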
        regards, tom lane


