It seems to me that when there is no explicit precision notation
attached, a time/timestamp datatype should not force a precision of
zero, but should accept whatever it's given. This is analogous to
the way we do char, varchar, and numeric: there's no length limit
if you don't specify one. For example, I think this result is quite
unintuitive:
regression=# select '2001-10-04 13:52:42.845985-04'::timestamptz;
      timestamptz
------------------------
 2001-10-04 13:52:43-04
(1 row)
Throwing away the clearly stated precision of the literal doesn't
seem like the right behavior to me.
The code asserts that SQL99 requires the default precision to be zero,
but I do not agree with that reading. What I find is in 6.1:
    30) If <time precision> is not specified, then 0 (zero) is implicit.
        If <timestamp precision> is not specified, then 6 is implicit.
so at the very least you'd need two different settings for TIME and
TIMESTAMP. But we don't enforce the spec's idea of default precision
for char, varchar, or numeric, so why start doing so with timestamp?
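For comparison, here is roughly what numeric does today when you don't
give it a precision (illustrative session, output spacing approximate):

    regression=# select 123.456789::numeric;
      numeric
    ------------
     123.456789
    (1 row)

No precision was declared, so nothing gets thrown away; I think TIME
and TIMESTAMP ought to behave the same way.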
Essentially, what I want is for gram.y to set typmod to -1 when it
doesn't see a "(N)" decoration on TIME/TIMESTAMP. I think everything
works correctly after that.
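With that change I'd expect the example above to come back untruncated,
along the lines of (assuming the session timezone matches the literal's
offset; spacing approximate):

    regression=# select '2001-10-04 13:52:42.845985-04'::timestamptz;
              timestamptz
    --------------------------------
     2001-10-04 13:52:42.845985-04
    (1 row)

with rounding happening only when the user explicitly writes
TIMESTAMP(n).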
regards, tom lane