"Roberto Cornacchia" <rob.c@virgilio.it> writes:
>> Why not? The executor *will* stop when it has as many output rows as
>> the LIMIT demands.
> Yes, but consider this very simple case:
> LIMIT 10
>   [something else]
>     MergeJoin (100000 rows)
>       Sort (100000 rows)
>         SeqScan on Table1 (100000 rows)
>       IndexScan on Table2 (100 rows)
> Assuming that referential constraints allow us to do it, we would do the
> following:
> [something else]
>   MergeJoin (10 rows)
>     SortStop 10 (10 rows)
>       SeqScan on Table1 (100000 rows)
>     IndexScan on Table2 (100 rows)
> Here, we get only 10 rows from the outer relation. *In general* this is
> NOT correct, but referential constraints make it safe in many cases. You
> can see that with the second approach, the "[something else]" operates
> on an input stream of cardinality 10, compared with 100000 in the first
> approach. This is what we call the "push-down" of the Stop operator.
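The core of the SortStop operator quoted above, stopping a sort once the first k rows in sort order have been produced, can be sketched as a top-N sort. This is an illustrative sketch in Python, not PostgreSQL code; the function name, k, and the data are made up for the example:

```python
import heapq
import random

def sort_stop(rows, k, key=lambda r: r):
    # Top-N sort: emit only the first k rows in sort order instead of
    # fully sorting the input. heapq.nsmallest scans the input once,
    # keeping a bounded heap of k candidates, so downstream operators
    # (the MergeJoin above) see an input cardinality of k.
    return heapq.nsmallest(k, rows, key=key)

# Illustrative: 100000 input rows, but only 10 reach the join.
random.seed(0)
table1 = [random.randrange(1_000_000) for _ in range(100_000)]
first_ten = sort_stop(table1, 10)
assert first_ten == sorted(table1)[:10]
```

Note the same caveat as in the plan above: handing only k outer rows to the join is safe only when constraints guarantee the join cannot discard any of them.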
If I understand your point correctly, the existing code arrives at the
same effect from a different direction: it will choose the right plan for
the query when the [something else] node doesn't need to read very many
rows. This isn't reflected very well in the EXPLAIN output, which might
be fooling you as to what's really happening.
I'm not sure about your comment about referential constraints. If you
are doing analysis of restriction clauses to prove that a particular
stage doesn't require reading as many rows as it otherwise would, then
you've done more than I have.
regards, tom lane