On Tue, Jun 21, 2016 at 11:17:19AM -0400, Robert Haas wrote:
> If the index scans are parameterized by values from the seq scan,
> which is likely the situation in which this sort of plan will be
> generated, we'll pay the extra cost of building the hash table once
> per row in something_big.
>
> I think we should consider switching from a nested loop to a hash join
> on the fly if the outer relation turns out to be bigger than expected.
> We could work out during planning what the expected breakeven point
> is; if the actual outer row count passes that, switch to a hash join.
> This has been discussed before, but nobody's tried to do the work,
> AFAIK.
Yes, the idea of either adjusting the execution plan mid-run when row
counts turn out to be inaccurate, or feeding information about the
misestimation back to the optimizer for future queries, is something I
hope we try someday.
--
  Bruce Momjian  <bruce@momjian.us>  http://momjian.us
  EnterpriseDB                       http://enterprisedb.com
+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +