Mark Dilger wrote:
> I've been thinking about this more, and now I don't see why this is an
> issue. When the planner estimates how many rows will be returned from a
> subquery that is being used within a join, it can't know which
> "parameters" to use either. (Parameters being whatever conditions the
> subquery will pivot upon which are the result of some other part of the
> execution of the full query.) So it seems to me that function S() is at
> no more of a disadvantage than the planner.
>
> If I defined a function S(a integer, b integer) which provides an
> estimate for the function F(a integer, b integer), then S(null, null)
> could be called when the planner can't know what a and b are. S could
> then still make use of the table statistics to provide some sort of
> estimate. Of course, this would mean that functions S() cannot be
> defined strict.
OK, NULL probably isn't a good sentinel value. F(null, null) could be the actual
call being made, so S(null, null) would have to mean "F is being passed nulls"
rather than "we don't know what F's arguments are yet", and the returned estimate
might be quite different for those two cases. You could have:
F(a integer, b integer)
S(a integer, a_is_known boolean, b integer, b_is_known boolean)
But I'm not fond of the verbosity of doubling the argument list. Since some
arguments might be known while others still are not, I don't think a single
boolean argument all_arguments_are_known is sufficient.
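
For concreteness, here is a minimal sketch of what a doubled-signature estimator
could look like in PL/pgSQL. The names f and s, the fallback selectivities, and
the body of s are all hypothetical; the point is only the shape of the interface,
where each *_is_known flag tells s whether the corresponding argument's value is
available at plan time:

    create function f(a integer, b integer) returns boolean
        language sql as $$ select a > b $$;

    -- Hypothetical estimator paired with f().  Returns an estimated
    -- selectivity between 0.0 and 1.0.  Each *_is_known flag says
    -- whether that argument's value is available at plan time.
    create function s(a integer, a_is_known boolean,
                      b integer, b_is_known boolean) returns float8
        language plpgsql as $$
    begin
        if a_is_known and b_is_known then
            -- Both values known: we can evaluate exactly.
            return case when a > b then 1.0 else 0.0 end;
        elsif a_is_known or b_is_known then
            -- Mixed case: one value known, the other not; this is
            -- where table statistics could be consulted.
            return 0.5;
        else
            -- Nothing known: arbitrary default, in the spirit of the
            -- planner's DEFAULT_INEQ_SEL.
            return 0.333;
        end if;
    end;
    $$;

The per-argument flags are what handle the mixed case above, which a single
all_arguments_are_known boolean could not express.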