Thread: Hex literals
I've got patches to adjust the interpretation of hex literals from an integer type (which is how I implemented it years ago to support the *syntax*) to a bit string type. I've mentioned this in a previous thread, and am following up now.

One point raised previously is that the spec may not be clear about the correct type assignment for a hex constant. I believe that the spec is clear on this (well, not really, but as clear as SQL99 manages to get ;) and that the correct assignment is to bit string (as opposed to a large object or some other alternative).

I base this on at least one part of the standard, which is a clause in the restrictions on the BIT feature (which we already support):

  31) Specifications for Feature F511, "BIT data type":
      a) Subclause 5.3, "<literal>":
         i) Without Feature F511, "BIT data type", a <general literal>
            shall not be a <bit string literal> or a <hex string
            literal>.

This seems to be a hard linkage of hex strings with the BIT type.

Comments or concerns?

- Thomas
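To make the proposed change concrete, here is a sketch of what the new interpretation would look like (hypothetical session; exact output formatting depends on the implementation):

```sql
-- Today a hex literal is read as an integer; under the patch it
-- would instead denote a bit string, one bit per binary digit,
-- four bits per hexit:
SELECT X'1F';               -- would behave like B'00011111' (8 bits)
SELECT X'1F' & B'00001111'; -- bit-string operators would then apply
```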
Oh, I've also implemented int8 to/from bit conversions, which was a trivial addition/modification to the int4 support already there... - Thomas
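A sketch of what the int8/bit conversions mentioned above might allow, assuming they mirror the existing int4 casts (the bit widths shown are assumptions):

```sql
-- int4 <-> bit already exists; int8 support would extend this to
-- 64-bit values, e.g.:
SELECT 4294967296::int8::bit(64);  -- a value too large for int4
SELECT X'FFFFFFFF00000000'::bit(64)::int8;
```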
Thomas Lockhart wrote:
> I've got patches to adjust the interpretation of hex literals from an
> integer type (which is how I implemented it years ago to support the
> *syntax*) to a bit string type. I've mentioned this in a previous
> thread, and am following up now.
>
> One point raised previously is that the spec may not be clear about the
> correct type assignment for a hex constant. I believe that the spec is
> clear on this (well, not really, but as clear as SQL99 manages to get ;)
> and that the correct assignment is to bit string (as opposed to a large
> object or some other alternative).
>
> I base this on at least one part of the standard, which is a clause in
> the restrictions on the BIT feature (which we already support):
>
>   31) Specifications for Feature F511, "BIT data type":
>       a) Subclause 5.3, "<literal>":
>          i) Without Feature F511, "BIT data type", a <general literal>
>             shall not be a <bit string literal> or a <hex string
>             literal>.
>
> This seems to be a hard linkage of hex strings with the BIT type.
>
> Comments or concerns?

My reading of this was that if there are pairs of <hexit>s, then assignment can be to <hex string literal> *or* <binary string literal>, but if there are not pairs (i.e. an odd number of <hexit>s) the interpretation must be <hex string literal>. I base this on Subclause 5.3, "<literal>". Peter was the one who pointed this out earlier.

Can BIT be the default but BYTEA be allowed by explicit cast?

Joe
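Joe's reading can be illustrated concretely (a hypothetical sketch assuming the BIT-default, BYTEA-by-cast scheme he proposes):

```sql
-- X'FF' has an even number of hexits: under this reading it could be
-- either a bit string or a binary string, so a cast could choose:
SELECT X'FF'::bit(8);   -- BIT as the default interpretation
SELECT X'FF'::bytea;    -- BYTEA allowed by explicit cast

-- X'F' has an odd number of hexits (4 bits, half a byte), so only the
-- bit-string interpretation is possible:
SELECT X'F'::bit(4);
```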
Thomas Lockhart writes:
> 31) Specifications for Feature F511, "BIT data type":
>     a) Subclause 5.3, "<literal>":
>        i) Without Feature F511, "BIT data type", a <general literal>
>           shall not be a <bit string literal> or a <hex string
>           literal>.
>
> This seems to be a hard linkage of hex strings with the BIT type.

You'll also find in 5.3 Conformance Rule 9):

  9) Without Feature T041, "Basic LOB data type support", conforming
     Core SQL language shall not contain any <binary string literal>.

which is an equally solid linkage. I might also add that the rules concerning the absence of a feature do not determine what happens in the presence of a feature. ;-)

Let's think: We could send a formal interpretation request to the standards committee. (They might argue that there is no ambiguity, because the target type is always known.) Or we could check what other database systems do.

In any case, I'd rather create a readable syntax for blob'ish types (which the current bytea input format does not qualify for) than map hexadecimal input to bit types, which is idiosyncratic.

-- 
Peter Eisentraut   peter_e@gmx.net
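For context on the readability complaint: the bytea input format Peter refers to spells non-printable bytes as backslash-octal escapes, while the alternative he hints at would be hex input (the second query below is hypothetical, not something either format supported at the time):

```sql
-- Existing bytea input: octal escapes, doubled backslashes in the
-- SQL string literal
SELECT '\\000\\336\\255'::bytea;

-- A more readable blob syntax might instead accept hex input
-- (hypothetical):
-- SELECT X'00DEAD'::bytea;
```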