From: Tom Lane
Subject: Re: Large Scale Aggregation (HashAgg Enhancement)
Msg-id: 9602.1137432979@sss.pgh.pa.us
In response to: Re: Large Scale Aggregation (HashAgg Enhancement) (Simon Riggs <simon@2ndquadrant.com>)
Responses: Re: Large Scale Aggregation (HashAgg Enhancement) (Simon Riggs <simon@2ndquadrant.com>)
           Re: Large Scale Aggregation (HashAgg Enhancement) (Simon Riggs <simon@2ndquadrant.com>)
List: pgsql-hackers

Simon Riggs <simon@2ndquadrant.com> writes:
> On Mon, 2006-01-16 at 00:07 -0500, Rod Taylor wrote:
>> A couple of days ago I found myself wanting to aggregate 3 Billion
>> tuples down to 100 Million tuples based on an integer key with six
>> integer values -- six sum()'s.

> There is already hash table overflow (spill to disk) logic in HashJoins,
> so this should be possible by reusing that code for HashAggs. That's on
> my todo list, but I'd welcome any assistance.

Yeah, I proposed something similar a while back in conjunction with
fixing the spill logic for hash joins (which was always there, but was
not adaptive until recently).  I got the join part done but got
distracted before fixing HashAgg :-(

In principle, you just reduce the range of currently-in-memory hash
codes whenever you run low on memory.  The so-far-accumulated working
state for aggregates that are not in the range anymore goes into a temp
file, and subsequently any incoming tuples that hash outside the range
go into another temp file.  After you've completed the scan, you
finalize and emit the aggregates that are still in memory, then you pick
up the first set of dropped aggregates, rescan the associated "TODO"
file of unprocessed tuples, lather, rinse, repeat till done.
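
To make the control flow concrete, here is a deliberately tiny,
self-contained C sketch of that scheme -- emphatically not PostgreSQL
source.  It assumes a single int grouping key, sum() as the only
aggregate, tmpfile() standing in for the executor's temp files, and a
fixed cap of four resident groups faking memory pressure; all names,
the range-halving policy, and the error exits are invented for
illustration.

    #include <stdio.h>
    #include <stdlib.h>

    #define TABLE_CAP 4     /* toy stand-in for work_mem: max resident groups */

    typedef struct { int key; long val; } Tuple;
    typedef struct { unsigned hash; int key; long sum; } Group;

    static unsigned hash_key(int k) { return (unsigned) k * 2654435761u; }

    /* Aggregate every tuple in 'input' whose hash lies in [lo, hi); 'carried'
     * holds states dumped for this range by an earlier pass (NULL at first). */
    static void run_pass(FILE *input, FILE *carried, unsigned lo, unsigned hi)
    {
        Group groups[TABLE_CAP], g;
        Tuple t;
        int n = 0;
        unsigned cutoff = hi;                  /* resident iff lo <= hash < cutoff */
        FILE *statef = NULL, *todof = NULL;    /* spilled states / deferred tuples */

        while (carried && fread(&g, sizeof g, 1, carried) == 1)
        {
            if (n == TABLE_CAP) { fputs("toy: carried overflow\n", stderr); exit(1); }
            groups[n++] = g;                   /* real code would re-split, not die */
        }

        while (fread(&t, sizeof t, 1, input) == 1)
        {
            unsigned h = hash_key(t.key);
            int i;

            if (h < lo || h >= hi)
                continue;                      /* another pass's range */
            if (h >= cutoff)
            {
                fwrite(&t, sizeof t, 1, todof);  /* dropped range: defer tuple */
                continue;
            }
            for (i = 0; i < n && groups[i].key != t.key; i++)
                ;                              /* toy linear lookup of the group */
            if (i == n)                        /* group not resident yet */
            {
                while (n == TABLE_CAP && cutoff > lo + 1)
                {                              /* low on memory: halve the range */
                    if (todof == NULL)
                    {
                        statef = tmpfile();
                        todof = tmpfile();
                    }
                    cutoff = lo + (cutoff - lo) / 2;
                    for (int j = 0; j < n; )   /* dump states now out of range */
                        if (groups[j].hash >= cutoff)
                        {
                            fwrite(&groups[j], sizeof g, 1, statef);
                            groups[j] = groups[--n];
                        }
                        else
                            j++;
                }
                if (h >= cutoff)               /* new group's range just dropped */
                {
                    fwrite(&t, sizeof t, 1, todof);
                    continue;
                }
                if (n == TABLE_CAP) { fputs("toy: skewed hash\n", stderr); exit(1); }
                groups[n] = (Group){ h, t.key, 0 };
                i = n++;
            }
            groups[i].sum += t.val;            /* advance the transition state */
        }

        for (int i = 0; i < n; i++)            /* finalize and emit resident groups */
            printf("key %d => sum %ld\n", groups[i].key, groups[i].sum);

        if (todof)                             /* lather, rinse, repeat on the rest */
        {
            rewind(statef);
            rewind(todof);
            run_pass(todof, statef, cutoff, hi);
        }
    }

    int main(void)
    {
        Tuple sample[] = { {7,1}, {3,2}, {7,1}, {1,5}, {9,1}, {4,4}, {3,1}, {12,1}, {5,2}, {9,3} };
        FILE *in = tmpfile();

        fwrite(sample, sizeof sample[0], sizeof sample / sizeof sample[0], in);
        rewind(in);
        run_pass(in, NULL, 0, ~0u);            /* whole hash range in one pass */
        return 0;
    }

Note that each TODO file is append-only and replayed front to back, so
at least in this toy a group's dumped state holds an in-order prefix and
the deferred suffix is replayed in arrival order -- which is exactly the
property the next paragraph worries about.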

The tricky part is to preserve the existing guarantee that tuples are
merged into their aggregate in arrival order.  (This does not matter for
the standard aggregates but it definitely does for custom aggregates,
and there will be unhappy villagers appearing on our doorsteps if we
break it.)  I think this can work correctly under the above sketch but
it needs to be verified.  It might require different handling of the
TODO files than what hashjoin does.
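
For anyone unfamiliar with why arrival order matters: sum() commutes,
but a transition function of the string-concatenation flavor does not.
A minimal illustration (the function and names are invented for the
example):

    #include <stdio.h>
    #include <string.h>

    /* Transition function of an imaginary string_agg-style custom aggregate:
     * append the incoming value to the accumulated state. */
    static void advance(char *state, int value)
    {
        char buf[16];
        snprintf(buf, sizeof buf, "%s%d", state[0] ? "," : "", value);
        strcat(state, buf);
    }

    int main(void)
    {
        int arrival[] = { 3, 1, 2 };       /* one group's tuples, in arrival order */
        char a[64] = "", b[64] = "";

        for (int i = 0; i < 3; i++)        /* merged in arrival order */
            advance(a, arrival[i]);
        for (int i = 2; i >= 0; i--)       /* same tuples, merged in another order */
            advance(b, arrival[i]);

        printf("arrival order: %s\n", a);  /* "3,1,2" */
        printf("reordered:     %s\n", b);  /* "2,1,3" -- a different answer */
        return 0;
    }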
        regards, tom lane

