I've got patches to adjust the interpretation of hex literals from an
integer type (which is how I implemented it years ago to support the
*syntax*) to a bit string type. I've mentioned this in a previous
thread, and am following up now.
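To make the change concrete, here is roughly what it looks like at the SQL
level (a sketch of the intended behavior, not output from the actual patch):

    -- current behavior: the hex literal is read as an integer
    SELECT x'1F';     -- 31
    -- with the patch: the hex literal is read as a bit string,
    -- four bits per hex digit, so x'1F' is equivalent to B'00011111'
    SELECT x'1F';     -- B'00011111'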
One point raised previously is that the spec may not be clear about the
correct type assignment for a hex constant. I believe that the spec is
clear on this (well, not really, but as clear as SQL99 manages to get ;-))
and that the correct assignment is to bit string (as opposed to a large
object or some other alternative).
I base this on at least one part of the standard, which is a clause in
the restrictions on the BIT feature (which we already support):
    31) Specifications for Feature F511, "BIT data type":
        a) Subclause 5.3, "<literal>":
           i) Without Feature F511, "BIT data type", a <general literal>
              shall not be a <bit string literal> or a <hex string literal>.
This seems to be a hard linkage of hex strings with the BIT type.
Comments or concerns?
- Thomas