On Thu, 24 Apr 2003, Stephan Szabo wrote:
>
> On Thu, 24 Apr 2003, Robert Dyas wrote:
>
> > Actually, what I'm suggesting first is much more straightforward. Each
> > operation must take place on a row set. When you've got a bunch of tables
> > that are joined together, and columns with unique indexes are specified in
> > the where clause to be equal to some simple value (regardless of which
> > table), then before you do any other processing, make sure you only process
> > on that single row! Don't join a million rows together only to throw them
> > all out but one!! Which it appears is exactly what 7.3.1 was doing. It
>
> It doesn't look that way to me. It looks to me from the explain output
> that it was doing an index scan getting the one row from the table with
> the condition and then joining that with each successive table (possibly
> getting additional rows from those), then doing a sort and unique for the
> distinct.
As a note, looking again at the explain analyze output, it appears most of
the time was spent in the sort and unique steps for the distinct. I wonder
if raising sort_mem would give a gain; I'm not sure about that. (At best
it'd still be on the order of 300x slower than the other query rather than
17000x.)
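For what it's worth, sort_mem in 7.3 can be changed per session without
touching postgresql.conf, so it's cheap to test; the value below is just a
guess at something larger than the default, not a recommendation:

```sql
-- sort_mem is measured in KB in 7.3 (default 1024);
-- try a larger value for this session only, then re-run
-- the EXPLAIN ANALYZE on the DISTINCT query and compare timings.
SET sort_mem = 8192;
```

If the sort was spilling to disk at the default setting, a larger sort_mem
should show up directly in the sort step's actual time.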