> The code asserts that SQL99 requires the default precision to be zero,
> but I do not agree with that reading. What I find is in 6.1:
> 30) If <time precision> is not specified, then 0 (zero) is implicit.
> If <timestamp precision> is not specified, then 6 is implicit.
> so at the very least you'd need two different settings for TIME and
> TIMESTAMP. But we don't enforce the spec's idea of default precision
> for char, varchar, or numeric, so why start doing so with timestamp?
Sure, I'd forgotten about the 6-vs-0 difference. Easy to put back in.
One of course might wonder why the spec *makes* them different.
"Why start doing so with timestamp?" SQL99 compliance, for one thing ;)
I'm not sure I'm comfortable with the spec's behavior, but without a
discussion I wasn't comfortable implementing it any other way.
> Essentially, what I want is for gram.y to set typmod to -1 when it
> doesn't see a "(N)" decoration on TIME/TIMESTAMP. I think everything
> works correctly after that.
"... works correctly..." == "... works the way we'd like...". Right?
This is the start of the discussion, I suppose. And I *expected* a
discussion like this, since SQL99 seems a bit ill-tempered on this
precision business. We shouldn't settle on a solution with just two of
us, and I'd like to hear from folks who have applications (the larger
the better) that would care about this. Even better if their app had
been running on some *other* DBMS. Anyone?
- Thomas