One thing that has gotten lost here is whether there is any market at all
for putting in some line of defence against (a to-be-determined degree of)
ambiguity at function creation time, in order to reduce the possible
problems (implementation- and user-side) at call time. What do you think?
Tom Lane writes:
> glb(A) is the greatest lower bound *within the set of available
> functions*.
Correct.
> Q, the requested call signature, is *not* in that set
Correct.
> The fact that the set of available functions forms a lattice gives you
> no guarantee whatever that glb(A) >= Q, because Q is not constrained
> by the lattice property.
I know. I don't use the lattice property to deduce the fact that
glb(A)>=Q. I use the lattice property to derive the existence of glb(A).
The result glb(A)>=Q comes from
1. Q is a lower bound on A (by definition of A)
2. glb(A) is a lower bound on A (by definition of glb)
3. glb(A)>=Q (by definition of "greatest")
Recall that A was defined as the set of functions >=Q in Q's equivalence
class, and was guaranteed to be non-empty by treating the other cases
separately.
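
To make that concrete, here is a minimal sketch in Python that walks
through steps 1-3. The "lattice" is just the toy chain
int2 <= int4 <= int8 <= numeric, and the candidate signatures are made
up for illustration; none of this is the real coercion machinery.

  from functools import reduce

  # a <= b iff a can be implicitly promoted to b (reflexive).
  SUPERTYPES = {
      "int2":    {"int2", "int4", "int8", "numeric"},
      "int4":    {"int4", "int8", "numeric"},
      "int8":    {"int8", "numeric"},
      "numeric": {"numeric"},
  }

  def type_le(a, b):
      return b in SUPERTYPES[a]

  def sig_le(s, t):
      # Pointwise order on signatures of equal arity.
      return len(s) == len(t) and all(type_le(a, b) for a, b in zip(s, t))

  def meet(a, b):
      # Meet of two types; trivial on a chain, a real lattice would
      # need a proper meet operation here.
      return a if type_le(a, b) else b

  Q = ("int2", "int2")                  # requested call signature
  candidates = [("int4", "int4"), ("int4", "int8"), ("int8", "int4")]

  A = [s for s in candidates if sig_le(Q, s)]       # functions >= Q
  g = tuple(reduce(meet, col) for col in zip(*A))   # glb(A), pointwise

  assert all(sig_le(Q, s) for s in A)   # 1. Q is a lower bound on A
  assert all(sig_le(g, s) for s in A)   # 2. glb(A) is a lower bound on A
  assert sig_le(Q, g)                   # 3. glb(A) >= Q

  print(g)   # ('int4', 'int4')

In this example glb(A) happens to be one of the candidates itself, so
it is the unique best match. The point of the lattice argument is that
glb(A)>=Q holds even when glb(A) is not itself in A.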
I think it works. :) In all but the most complicated cases this reduces
to the obvious behaviour, but on the other hand it scales to arbitrarily
complicated cases.
--
Peter Eisentraut Sernanders väg 10:115
peter_e@gmx.net 75262 Uppsala
http://yi.org/peter-e/ Sweden