> > Implement precision for the INTERVAL() type.
> > Use the typmod mechanism for both of the INTERVAL features.
> If I could figure out what the typmod of an interval type is defined
> to be, I'd fix format_type() to display the type name properly so that
> pg_dump would do the right thing. But it doesn't seem very well
> documented as to what the valid values are...
I tried to follow what seemed to be the conventions of the numeric data
type by putting the "precision" in the low 16 bits, with 0xFFFF implying
"unspecified precision". I reused some existing mask definitions for the
fields within an interval, and plopped those into the high 16 bits, with
0xFFFF << 16 implying that all fields are allowed. So "typmod = -1"
implies behavior compatible with the existing/former feature set.
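
Roughly, the packing looks something like this (the names below are just
illustrative for this message, not necessarily what's in the patch):

  /* Rough sketch of the encoding described above; names are
   * illustrative only. */
  #include <stdint.h>

  #define IV_ALL_FIELDS   0xFFFF   /* high half: every interval field allowed */
  #define IV_NO_PRECISION 0xFFFF   /* low half: precision left unspecified */

  /* pack the field mask and the precision into one typmod word */
  static int32_t
  interval_typmod(unsigned fields, unsigned precision)
  {
      uint32_t packed = ((fields & 0xFFFFu) << 16) | (precision & 0xFFFFu);
      return (int32_t) packed;
  }

  /* "all fields, unspecified precision" packs to 0xFFFFFFFF, i.e. -1,
   * which matches the traditional "no typmod" value */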
Not sure *where* this should be documented, since it is used in more
than one place. Suggestions?
> ERROR: AdjustIntervalForTypmod(): internal coding error
Oops. You found the problem spot; I've got patches...
> Also, you're going to have some problems with your plan to make
> 0xFFFF in the high bits mean "no range, but maybe a precision",
> because there are a number of places that think that any typmod < 0
> is a dummy. I would strongly suggest that you arrange the coding
> of interval's typmod to follow that convention, rather than assume
> you can ignore it. Perhaps use 0x7FFF (or zero...) to mean "no range",
> and make sure none of the bits that are used are the sign bit?
What exactly does "is a dummy" mean? (outside of possible personal
opinions ;) Are there places that decline to call a "normalization
routine" if typmod is less than zero, rather than equal to -1? I didn't
notice any such effect in my (limited) testing.
btw, in changing the convention to use 0x7FFF rather than 0xFFFF, I
found another bug: I had transposed the two subfields in one case in
gram.y. That will also be fixed.
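
fwiw, a little standalone illustration of why the sign bit matters (toy
code, not from the patch):

  /* With 0xFFFF in the high half, any "all fields" typmod comes out
   * negative and is caught by generic "typmod < 0" tests; with 0x7FFF
   * it stays positive. */
  #include <stdio.h>
  #include <stdint.h>

  int
  main(void)
  {
      int32_t old_style = (int32_t) ((0xFFFFu << 16) | 6u);  /* all fields, precision 6 */
      int32_t new_style = (int32_t) ((0x7FFFu << 16) | 6u);

      printf("old: %d  negative? %s\n", old_style, old_style < 0 ? "yes" : "no");
      printf("new: %d  negative? %s\n", new_style, new_style < 0 ? "yes" : "no");
      return 0;
  }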
- Thomas