I wrote:
> Why do these use anynonarray rather than anyelement? Given that we
> support ranges of arrays (there's even a regression test), this seems
> a bogus limitation.
After experimenting with changing that, I see why you did it: some of
the regression tests fail, eg,
SELECT * FROM array_index_op_test WHERE i <@ '{38,34,32,89}' ORDER BY seqno;
ERROR:  operator is not unique: integer[] <@ unknown
That is, if we have both anyarray <@ anyarray and anyelement <@ anyrange
operators, the parser is unable to decide which one is a better match to
integer[] <@ unknown. However, restricting <@ to not work for ranges
over arrays is a pretty horrid fix for that, because there is then simply
no access to the lost functionality. It'd be better IMO to fail here
and require the unknown literal to be cast explicitly than to do this.
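For reference, the explicit-cast form of the query above (which removes
the unknown type and hence the ambiguity) would be:

    SELECT * FROM array_index_op_test
      WHERE i <@ '{38,34,32,89}'::integer[] ORDER BY seqno;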
But what surprises me about this example is that I'd have expected the
heuristic "assume the unknown is of the same type as the other input"
to resolve it. Looking more closely, I see that we apply that heuristic
in such a way that it works only for exact operator matches, not for
matches requiring coercion (including polymorphic-type matches). This
seems a bit weird. I propose adding a step to func_select_candidate
that tries to resolve things that way, ie, if all the known-type inputs
have the same type, then try assuming that the unknown-type ones are of
that type, and see if that leads to a unique match. There actually is a
comment in there that claims we do that, but the code it's attached to
is really doing something else that involves preferred types within
type categories...
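In toy form, the proposed step amounts to something like the sketch below.
This is only an illustrative model with invented names and data structures,
not the actual parse_func.c code; in particular it pretends candidates are
already instantiated to concrete argument-type tuples.

```python
UNKNOWN = "unknown"

def resolve_unknowns(input_types, candidates):
    """Proposed heuristic, modeled crudely: if all known-type inputs
    share one type T, assume the unknown-type inputs are also T, and
    see whether that yields exactly one matching candidate.

    input_types: tuple of actual argument types, possibly UNKNOWN.
    candidates:  list of (name, arg_type_tuple) pairs.
    Returns the unique surviving candidate, or None if still ambiguous.
    """
    known = {t for t in input_types if t != UNKNOWN}
    if len(known) != 1:
        return None  # heuristic applies only when the known inputs agree
    assumed = known.pop()
    guessed = tuple(assumed if t == UNKNOWN else t for t in input_types)
    survivors = [c for c in candidates if c[1] == guessed]
    return survivors[0] if len(survivors) == 1 else None

# Model of integer[] <@ unknown: one candidate from anyarray <@ anyarray
# (instantiated at integer[]) and one from anyelement <@ anyrange.
candidates = [
    ("array_contained", ("integer[]", "integer[]")),
    ("elem_contained_by_range", ("integer[]", "int4range")),
]
match = resolve_unknowns(("integer[]", UNKNOWN), candidates)
# Assuming the unknown is integer[] leaves only the array operator.
```

Under this model, the unknown literal is assumed to be integer[], the
array-containment candidate matches uniquely, and the ambiguity goes away
without dumbing down the range operator's signature.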
Thoughts?
regards, tom lane