Gregory Stark <gsstark@mit.edu> writes:
> On the other hand, there are some common situations where you could see
> atypical increases. Consider joining a bunch of small tables to
> generate a large result set. The small tables are probably all in
> memory and the result set may only have a small number of distinct
> values. If you throw out the duplicates early, you save *all* the
> I/O. If you have to do a disk sort instead, it could be many orders of
> magnitude slower.
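[Concretely, the kind of query Greg describes might look like the sketch
below; the schema is hypothetical, not from his mail:

    -- Small lookup tables joined into a large result set that collapses
    -- to only a few distinct (region, category) pairs.
    SELECT DISTINCT c.region, p.category
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    JOIN products  p ON p.id = o.product_id;
    -- Deduplicating in a hash table as rows stream out of the join
    -- avoids ever writing the full join result to a disk sort.
]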
Right, we already have support for doing that well, in the form of
hashed aggregation. What needs to happen is to get that to work for
DISTINCT as well as GROUP BY. IIRC, DISTINCT is currently rather
thoroughly intertwined with ORDER BY, and we'd have to figure out
some way to decouple them --- without breaking DISTINCT ON, which
makes it a lot harder :-(
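To sketch the three query forms in play (table and column names below are
hypothetical, for illustration only):

    -- GROUP BY: the planner can already use hashed aggregation here.
    SELECT region, category
    FROM big_result
    GROUP BY region, category;

    -- Semantically equivalent DISTINCT, which today is planned through
    -- the sort machinery rather than being allowed to hash.
    SELECT DISTINCT region, category
    FROM big_result;

    -- DISTINCT ON genuinely depends on the sort order: it keeps the
    -- first row of each (region) group as defined by the ORDER BY,
    -- so it can't simply be switched over to hashing.
    SELECT DISTINCT ON (region) region, category, revenue
    FROM big_result
    ORDER BY region, revenue DESC;
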
regards, tom lane