Thomas Lockhart wrote:
> I've got patches to adjust the interpretation of hex literals from an
> integer type (which is how I implemented it years ago to support the
> *syntax*) to a bit string type. I've mentioned this in a previous
> thread, and am following up now.
>
> One point raised previously is that the spec may not be clear about the
> correct type assignment for a hex constant. I believe that the spec is
> clear on this (well, not really, but as clear as SQL99 manages to get ;)
> and that the correct assignment is to bit string (as opposed to a large
> object or some other alternative).
>
> I base this on at least one part of the standard, which is a clause in
> the restrictions on the BIT feature (which we already support):
>
> 31) Specifications for Feature F511, "BIT data type":
> a) Subclause 5.3, "<literal>":
> i) Without Feature F511, "BIT data type", a <general literal>
> shall not be a <bit string literal> or a <hex string
> literal>.
>
> This seems to be a hard linkage of hex strings with the BIT type.
>
> Comments or concerns?
>
My reading of this was that if the <hexit>s come in pairs, the literal
can be taken as either a <hex string literal> *or* a <binary string
literal>, but if they do not (i.e. an odd number of <hexit>s), the only
possible interpretation is <hex string literal>. I base this on
Subclause 5.3, <literal>. Peter was the one who pointed this out
earlier.
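For example (just a sketch of how I read 5.3; the particular literals
are made up for illustration):

    SELECT X'1F';   -- two <hexit>s: could be a <hex string literal> (bit)
                    --   or a <binary string literal> (binary string)
    SELECT X'1FA';  -- three <hexit>s: must be a <hex string literal> (bit)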
Can BIT be the default but BYTEA be allowed by explicit cast?
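Something like this, say (hypothetical; assumes we would actually
provide such a cast):

    SELECT X'1F';                 -- bit string by default
    SELECT CAST(X'1F' AS bytea);  -- bytea only via an explicit cast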
Joe