Re: hashjoin chosen over 1000x faster plan - Mailing list pgsql-performance

From Kevin Grittner
Subject Re: hashjoin chosen over 1000x faster plan
Msg-id 470CD3E3.EE98.0025.0@wicourts.gov
In response to Re: hashjoin chosen over 1000x faster plan  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: hashjoin chosen over 1000x faster plan  (Simon Riggs <simon@2ndquadrant.com>)
List pgsql-performance
>>> On Wed, Oct 10, 2007 at  1:07 PM, in message <20980.1192039650@sss.pgh.pa.us>,
Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
>> Basically the planner doesn't ever optimise for the possibility of the
>> never-executed case because even a single row returned would destroy
>> that assumption.
>
> It's worse than that: the outer subplan *does* return some rows.
> I suppose that all of them had NULLs in the join keys, which means
> that (since 8.1 or so) nodeMergejoin discards them as unmatchable.
> Had even one been non-NULL the expensive subplan would have been run.
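
A trivial sketch of that behavior, with made-up tables: under an
equality join a NULL key can never match anything, so the executor is
free to discard such rows as unmatchable.

CREATE TEMP TABLE t1 (k int);
CREATE TEMP TABLE t2 (k int);
INSERT INTO t1 VALUES (1), (NULL);
INSERT INTO t2 VALUES (1), (NULL);
-- NULL = NULL does not evaluate to true, so only the k = 1 row joins:
SELECT * FROM t1 JOIN t2 USING (k);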

Well, this query is run tens of thousands of times per day by our web
application; less than one percent of those runs would require the
subplan.  (In my initial post I showed counts to demonstrate that 1%
of the rows had a non-NULL value and, while I wouldn't expect the
planner to know this, those non-NULL values tend to be clustered in a
lower percentage of cases.)  If the philosophy of the planner is to go
for the lowest average cost (versus the lowest worst-case cost),
shouldn't it use the statistics to look at the percentage of NULLs?
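
For what it's worth, the statistic is already collected: ANALYZE
records the fraction of NULLs in each column, visible as null_frac in
pg_stats.  Something like this (table and column names made up) shows
what the planner would have to work with:

SELECT tablename, attname, null_frac
FROM pg_stats
WHERE tablename = 'my_table'    -- hypothetical table name
  AND attname = 'join_key';     -- hypothetical join key column
-- With ~99% of the keys NULL, null_frac should come back near 0.99.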

-Kevin