On Mon, 2010-10-18 at 14:49 -0400, Tom Lane wrote:
> whereas an int-timestamp build sees these inputs as all the same.
> Thus, to get into trouble you'd need to have a unique index on data that
> conflicts at the microsecond scale but not at the tenth-of-a-microsecond
> scale. This seems implausible. In particular, you didn't get any such
> data from now(), which relies on Unix APIs that don't go below
> microsecond precision. You might conceivably have entered such data
> externally, as I did above, but you'd have to not notice/care that it
> wasn't coming back out at the same precision.
You can also get there via interval math, like multiplying an interval
by a numeric. That seems slightly more plausible.
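To make the collision concrete, here's a small Python sketch (illustrative
only, not PostgreSQL internals) of why values that differ below a
microsecond stay distinct under a float representation but collapse under
an integer-microsecond one:

```python
# Three timestamps (as float seconds) that differ only at the
# tenth-of-a-microsecond scale:
vals = [1.0000001, 1.00000015, 1.0000002]

# A float build keeps them distinct:
as_float = set(vals)
print(len(as_float))  # 3

# An integer-microsecond build rounds them to the same value:
as_int_us = {round(v * 1_000_000) for v in vals}
print(len(as_int_us))  # 1
```

So data entered at sub-microsecond precision would violate a uniqueness
check only after switching from float to integer timestamps.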
> So the argument seems academic to me ...
With UNIQUE indexes I agree completely. If nothing else, who puts a
UNIQUE index on high-precision timestamps? And the problem has existed
for a long time already; it's nothing new.
With Exclusion Constraints, it's slightly less academic, and they're a
new addition. Still pretty far-fetched, but at least plausible, which is
why I brought it up.
However, I won't argue with the "don't do anything" approach to
float-timestamps.
Regards,
	Jeff Davis