Re: Floating-point timestamps versus Range Types - Mailing list pgsql-hackers

From Tom Lane
Subject Re: Floating-point timestamps versus Range Types
Date
Msg-id 23592.1287427768@sss.pgh.pa.us
In response to Re: Floating-point timestamps versus Range Types  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: Floating-point timestamps versus Range Types
List pgsql-hackers
Robert Haas <robertmhaas@gmail.com> writes:
> A more interesting question is whether and how we can ease the
> migration path from float timestamps to integer timestamps.  Even
> without range types, if someone does have a UNIQUE index on a
> timestamp column, could they get an error if they dump from a
> float-timestamp version of PG and restore onto an integer-timestamp
> version?

In principle yes, but I think the risk is pretty hypothetical.
Currently (2010, ten years out from the internal epoch) the effective
resolution of IEEE-float-based timestamps is about a tenth of a
microsecond.  Thus for example, on 8.3 I get

regression=# select '2010-10-18 14:35:14.6164431-04'::timestamptz =
             '2010-10-18 14:35:14.6164432-04'::timestamptz;
 ?column?
----------
 f
(1 row)

regression=# select '2010-10-18 14:35:14.6164431-04'::timestamptz =
             '2010-10-18 14:35:14.6164431-04'::timestamptz;
 ?column?
----------
 t
(1 row)

regression=# select '2010-10-18 14:35:14.6164431-04'::timestamptz =
             '2010-10-18 14:35:14.61644311-04'::timestamptz;
 ?column?
----------
 t
(1 row)

whereas an int-timestamp build sees these inputs as all the same.
Thus, to get into trouble you'd need to have a unique index on data that
conflicts at the microsecond scale but not at the tenth-of-a-microsecond
scale.  This seems implausible.  In particular, you didn't get any such
data from now(), which relies on Unix APIs that don't go below
microsecond precision.  You might conceivably have entered such data
externally, as I did above, but you'd have to not notice/care that it
wasn't coming back out at the same precision.  And you'd have to never
have dumped/reloaded using pg_dump, or the low order digits would have
vanished already.  And you'd have to not be dealing with data outside
a range of roughly 1900-2100, or the precision of floats would actually
be worse than ints.
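[Editor's note: the resolution figures above can be sanity-checked outside PostgreSQL. The sketch below is plain Python, not PostgreSQL code; it models a float timestamp as an IEEE double counting seconds from the 2000-01-01 epoch, and the year-length arithmetic is only an order-of-magnitude approximation.]

```python
import math

SECONDS_PER_YEAR = 365.25 * 86400  # approximate; fine for order-of-magnitude

# A float timestamp stores seconds from 2000-01-01 in an IEEE double.
# math.ulp() returns the gap to the next representable double, i.e. the
# effective resolution of such a timestamp at that distance from the epoch.
res_2010 = math.ulp(10 * SECONDS_PER_YEAR)    # ~2010, when this was written
res_2100 = math.ulp(100 * SECONDS_PER_YEAR)   # edge of the 1900-2100 window
res_2300 = math.ulp(300 * SECONDS_PER_YEAR)   # well outside that window

print(f"{res_2010:.2e}")  # ~6e-08 s: about a tenth of a microsecond
print(f"{res_2100:.2e}")  # ~5e-07 s: approaching the 1-microsecond mark
print(f"{res_2300:.2e}")  # ~2e-06 s: now coarser than integer timestamps
```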

So the argument seems academic to me ...
        regards, tom lane

