Thread: Unhappiness with forced precision conversion for timestamp

Unhappiness with forced precision conversion for timestamp

From
Tom Lane
Date:
It seems to me that when there is no explicit precision notation
attached, a time/timestamp datatype should not force a precision of
zero, but should accept whatever it's given.  This is analogous to
the way we do char, varchar, and numeric: there's no length limit
if you don't specify one.  For example, I think this result is quite
unintuitive:

regression=# select '2001-10-04 13:52:42.845985-04'::timestamp;
      timestamptz
------------------------
 2001-10-04 13:52:43-04
(1 row)

Throwing away the clearly stated precision of the literal doesn't
seem like the right behavior to me.

The code asserts that SQL99 requires the default precision to be zero,
but I do not agree with that reading.  What I find is in 6.1:
       30) If <time precision> is not specified, then 0 (zero) is implicit.
           If <timestamp precision> is not specified, then 6 is implicit.

so at the very least you'd need two different settings for TIME and
TIMESTAMP.  But we don't enforce the spec's idea of default precision
for char, varchar, or numeric, so why start doing so with timestamp?

Essentially, what I want is for gram.y to set typmod to -1 when it
doesn't see a "(N)" decoration on TIME/TIMESTAMP.  I think everything
works correctly after that.
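As a sketch of what this would look like (the output shown is what the proposal implies, not actual psql output):

```sql
-- With typmod = -1 when no "(N)" is given, the fractional seconds
-- stated in the literal would be preserved rather than rounded away
-- (hypothetical session under the proposal):
SELECT '2001-10-04 13:52:42.845985-04'::timestamp;
--           timestamptz
-- -------------------------------
--  2001-10-04 13:52:42.845985-04
-- (1 row)
```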
        regards, tom lane


Re: Unhappiness with forced precision conversion for timestamp

From
Thomas Lockhart
Date:
> The code asserts that SQL99 requires the default precision to be zero,
> but I do not agree with that reading.  What I find is in 6.1:
>         30) If <time precision> is not specified, then 0 (zero) is implicit.
>             If <timestamp precision> is not specified, then 6 is implicit.
> so at the very least you'd need two different settings for TIME and
> TIMESTAMP.  But we don't enforce the spec's idea of default precision
> for char, varchar, or numeric, so why start doing so with timestamp?

Sure, I'd forgotten about the 6 vs 0 differences. Easy to put back in.
One of course might wonder why the spec *makes* them different.

"Why start doing so with timestamp?". SQL99 compliance for one thing ;)

I'm not sure I'm comfortable with the spec behavior, but without a
discussion I wasn't comfortable implementing it another way.

> Essentially, what I want is for gram.y to set typmod to -1 when it
> doesn't see a "(N)" decoration on TIME/TIMESTAMP.  I think everything
> works correctly after that.

"... works correctly..." == "... works the way we'd like...". Right?

This is the start of the discussion I suppose. And I *expected* a
discussion like this, since SQL99 seems a bit ill-tempered on this
precision business. We shouldn't settle on a solution with just two of
us, and I guess I'd like to hear from folks who have applications (the
larger the better) who would care about this. Even better if their app
had been running on some *other* DBMS. Anyone?
                        - Thomas


Re: Unhappiness with forced precision conversion for timestamp

From
Thomas Lockhart
Date:
> So far, as compared to many other databases, PostgreSQL remains
> pretty close to the standard (at least for our projects).  The only
> areas where we have had issues are the default inclusion of the
> timezone information when retrieving the timestamp information and the
> slightly non-standard interval literal notation (i.e., including the
> year-month or day-time interval information inside the single quotes
> with the literal string).

You will be able to choose "timestamp without time zone" in the next
release.

> My vote on all datetime questions is to stick strictly to the
> standard.

Hmm. It isn't at all clear that the standards guys were awake or sober
when working on the date/time features. I assume that much of the
cruftiness in the standard is forced by influential contributors who
have an existing database product, but maybe there is some other
explanation of why good folks can get so confused.

otoh, I'm not sure why people nowadays would *not* use time zones in
their applications, since everyone is so much more globally aware and
distributed than in decades past.

> Of course sticking to the standard is not always easy as the standard
> is not always clear or even consistent.  (I'm only familiar with ANSI
> 92 not ANSI 99.) Time zones in particular seem to be problematic.

:-P

Have you actually used ANSI SQL9x time zones? istm that "one offset fits
all" is really ineffective in supporting real applications, but I'd like
to hear about how other folks use it.

> In this case, I believe that it would be preferable to stick with the
> TIME(0) and TIMESTAMP(6) default precision.  In our applications, we
> always specify the precision, so the default precision is not a real
> concern for us; however, for portability, I still suggest sticking
> with the standard.

We are likely to use the 0/6 convention for the next release (though why
TIME should default to zero decimal places and TIMESTAMP default to
something else makes no sense).
                        - Thomas


Re: Unhappiness with forced precision conversion for

From
Peter Eisentraut
Date:
Tom Lane writes:

> regression=# select '2001-10-04 13:52:42.845985-04'::timestamp;
>       timestamptz
> ------------------------
>  2001-10-04 13:52:43-04
> (1 row)
>
> Throwing away the clearly stated precision of the literal doesn't
> seem like the right behavior to me.

That depends on the exact interpretation of '::'.

Recall that the SQL syntax for a timestamp literal is actually
   TIMESTAMP 'YYYY-MM-DD HH:MM:SS.XXX....'

with the "TIMESTAMP" required.  The rules concerning this are...
       18) The declared type of a <time literal> that does not specify
           <time zone interval> is TIME(P) WITHOUT TIME ZONE, where P is
           the number of digits in <seconds fraction>, if specified, and
           0 (zero) otherwise.  The declared type of a <time literal> that
           specifies <time zone interval> is TIME(P) WITH TIME ZONE, where
           P is the number of digits in <seconds fraction>, if specified,
           and 0 (zero) otherwise.

which is what you indicated you would expect.
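Rule 18 applied to a few literals, as a sketch (the declared types in the comments are what the rule implies, not current PostgreSQL output):

```sql
-- The declared precision follows the number of fractional digits
-- actually written in the literal:
SELECT TIME '13:52:42';            -- TIME(0) WITHOUT TIME ZONE
SELECT TIME '13:52:42.845';        -- TIME(3) WITHOUT TIME ZONE
SELECT TIME '13:52:42.845985-04';  -- TIME(6) WITH TIME ZONE
```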

However, if you interpret X::Y as CAST(X AS Y) then the truncation is
entirely correct.

You might expect all of

'2001-10-05 22:41:00'
TIMESTAMP '2001-10-05 22:41:00'
'2001-10-05 22:41:00'::TIMESTAMP
CAST('2001-10-05 22:41:00' AS TIMESTAMP)

to evaluate the same (in an appropriate context), but SQL really defines
all of these to be slightly different (or nothing at all).  This
difference is already reflected in the parser:  The first two are
"constants", the latter two are "type casts".

I think in a consistent extension of the standard, the first two should
take the precision as given, whereas the last two should truncate.

To make the TIMESTAMP in #2 be just a data type vs. meaning TIMESTAMP(0)
in #3 and #4, the grammar rules would have to be beaten around a little,
but it seems doable.
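Under that reading, the four forms would behave as sketched below (hypothetical semantics under the proposed extension, not current behavior):

```sql
SELECT '2001-10-05 22:41:00.12';                     -- untyped constant
SELECT TIMESTAMP '2001-10-05 22:41:00.12';           -- literal: TIMESTAMP(2), keeps .12
SELECT '2001-10-05 22:41:00.12'::TIMESTAMP;          -- cast to TIMESTAMP(0): rounds to 22:41:00
SELECT CAST('2001-10-05 22:41:00.12' AS TIMESTAMP);  -- same as the :: form
```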

-- 
Peter Eisentraut   peter_e@gmx.net   http://funkturm.homeip.net/~peter



Re: Unhappiness with forced precision conversion

From
F Harvell
Date:
We use timestamps and intervals quite a bit in our applications.  We
also use several different databases.  Unfortunately, the time/date/
interval area is one that is not at all consistent between databases.
It makes life particularly difficult when trying to re-use application
code.

So far, as compared to many other databases, PostgreSQL remains
pretty close to the standard (at least for our projects).  The only
areas where we have had issues are the default inclusion of the
timezone information when retrieving the timestamp information and the
slightly non-standard interval literal notation (i.e., including the
year-month or day-time interval information inside the single quotes
with the literal string).

My vote on all datetime questions is to stick strictly to the
standard.

Of course sticking to the standard is not always easy as the standard
is not always clear or even consistent.  (I'm only familiar with ANSI
92 not ANSI 99.) Time zones in particular seem to be problematic.

In this case, I believe that it would be preferable to stick with the
TIME(0) and TIMESTAMP(6) default precision.  In our applications, we
always specify the precision, so the default precision is not a real
concern for us; however, for portability, I still suggest sticking
with the standard.

Thanks,
F Harvell


On Thu, 04 Oct 2001 20:30:24 -0000, Thomas Lockhart wrote:
> > The code asserts that SQL99 requires the default precision to be zero,
> > but I do not agree with that reading.  What I find is in 6.1:
> >         30) If <time precision> is not specified, then 0 (zero) is implicit.
> >             If <timestamp precision> is not specified, then 6 is implicit.
> > so at the very least you'd need two different settings for TIME and
> > TIMESTAMP.  But we don't enforce the spec's idea of default precision
> > for char, varchar, or numeric, so why start doing so with timestamp?
> 
> Sure, I'd forgotten about the 6 vs 0 differences. Easy to put back in.
> One of course might wonder why the spec *makes* them different.
> 
> "Why start doing so with timestamp?". SQL99 compliance for one thing ;)
> 
> I'm not sure I'm comfortable with the spec behavior, but without a
> discussion I wasn't comfortable implementing it another way.
> 
> > Essentially, what I want is for gram.y to set typmod to -1 when it
> > doesn't see a "(N)" decoration on TIME/TIMESTAMP.  I think everything
> > works correctly after that.
> 
> "... works correctly..." == "... works the way we'd like...". Right?
> 
> This is the start of the discussion I suppose. And I *expected* a
> discussion like this, since SQL99 seems a bit ill-tempered on this
> precision business. We shouldn't settle on a solution with just two of
> us, and I guess I'd like to hear from folks who have applications (the
> larger the better) who would care about this. Even better if their app
> had been running on some *other* DBMS. Anyone?
> 
>                          - Thomas




Re: Unhappiness with forced precision conversion

From
F Harvell
Date:
On Fri, 05 Oct 2001 19:35:48 -0000, Thomas Lockhart wrote:
> ...
> 
> Have you actually used ANSI SQL9x time zones? istm that "one offset fits
> all" is really ineffective in supporting real applications, but I'd like
> to hear about how other folks use it.

Fortunately, most of our date/time information is self-referential.
I.e., we are usually looking at intervals between an initial date/
timestamp and the current date/timestamp.  This has effectively
eliminated the need to deal with time zones.

> > In this case, I believe that it would be preferable to stick with the
> > TIME(0) and TIMESTAMP(6) default precision.  In our applications, we
> always specify the precision, so the default precision is not a real
> concern for us; however, for portability, I still suggest sticking
> > with the standard.
> 
> We are likely to use the 0/6 convention for the next release (though why
> TIME should default to zero decimal places and TIMESTAMP default to
> something else makes no sense).

The only thing that I can think of is that originally, the DATE and
TIME types were integer values and that when the "new" TIMESTAMP data
type was "created" the interest was to increase the precision.  I
would guess, as you have also suggested, that the standards were based
upon existing implementations (along with an interest in backwards
compatibility).

Thanks,
F Harvell