What is the reasoning behind having two different implementations of the
datetime types, with slightly different behavior? Do we intend to keep
supporting both FP- and integer-based datetimes indefinitely?
Clearly, there are some costs associated with maintaining two different
implementations:
(1) It means we need to maintain two sets of code, with a corresponding
increase in the maintenance burden and in the probability of introducing
bugs, and it makes datetime behavior more difficult to test.
(2) In general, I think it is a fundamentally *bad* idea to have the
semantics of a builtin data type differ subtly depending on the value of
a configure parameter. It makes writing portable applications more
difficult, and can introduce hard-to-fix bugs.
So, are there any corresponding benefits to providing both FP and
integer datetimes? AFAIK the following differences in user-visible
behavior exist:
* integer timestamps have the same precision (one microsecond) over
their entire range, whereas FP timestamps do not. This is clearly an
advantage for integer timestamps; see the sketch after this list.
* integer timestamps have a smaller range than FP timestamps (294276 AD
vs. 5874897 AD). Are there actually applications that use timestamps
beyond 300,000 AD?
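To make the precision and range points concrete, here is a quick
standalone sketch (my own, not taken from the timestamp code; it just
assumes the usual conventions of seconds-since-2000 stored in a double
for FP builds and microseconds-since-2000 stored in an int64 for
integer builds). It prints the gap between adjacent representable
double values near the epoch and ~5000 years out, plus the arithmetic
behind the 294276 AD limit:

    /*
     * Standalone sketch (hypothetical, not PostgreSQL source).  Assumes
     * FP timestamps are seconds since 2000-01-01 in a double, and
     * integer timestamps are microseconds since 2000-01-01 in an int64.
     * Compile with something like: cc -std=c99 sketch.c -lm
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    int
    main(void)
    {
        /*
         * Precision: a double has a 53-bit mantissa, so the gap between
         * adjacent representable values grows with the magnitude of the
         * stored seconds value.
         */
        double  near_epoch = 86400.0;                   /* one day after 2000 */
        double  far_future = 86400.0 * 365.25 * 5000;   /* ~5000 years out */

        printf("double gap near epoch: %.3g us\n",
               (nextafter(near_epoch, INFINITY) - near_epoch) * 1e6);
        printf("double gap 5000y out:  %.3g us\n",
               (nextafter(far_future, INFINITY) - far_future) * 1e6);

        /*
         * Range: INT64_MAX microseconds is about 292,277 years, which,
         * added to the 2000 epoch, is where the ~294276 AD limit above
         * comes from.  Resolution is exactly 1 us over that whole range.
         */
        printf("int64 range: ~%.0f years from the 2000 epoch\n",
               (double) INT64_MAX / 1e6 / 86400.0 / 365.2425);

        return 0;
    }

Assuming IEEE-754 doubles, the gap is roughly 30 microseconds at 5000
years out, and it already exceeds one microsecond once the stored value
passes 2^33 seconds, i.e. only a couple hundred years from the 2000
epoch.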
Unless there are lots of applications that need timestamps over such a
large range, ISTM integer datetimes are the better long-term approach,
and I don't see how the FP-based datetime code justifies the maintenance
burden. Notably, the FP datetime code doesn't depend on having a
functional int64 type, but in 2007, are there really any platforms we
care about that don't have such a type?
Therefore, I propose that we make integer datetimes the default (perhaps
for 8.4), and then eventually remove the floating-point datetime code.
Comments?
-Neil
P.S. One thing to verify is that the performance of integer datetimes is
no worse than that of FP datetimes. I'd intuitively expect that to be
true, but it would be worth investigating.
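As a first sanity check of the raw arithmetic, a throwaway sketch along
these lines (mine, purely illustrative) compares int64 and double
accumulation; any real difference would more likely show up in the
conversion paths (timestamp2tm() and friends), so SQL-level tests under
both builds would still be needed:

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define N 100000000LL

    int
    main(void)
    {
        volatile int64_t isum = 0;      /* volatile so the loops aren't optimized away */
        volatile double fsum = 0.0;
        clock_t t0, t1, t2;
        int64_t i;

        t0 = clock();
        for (i = 0; i < N; i++)
            isum += i;                  /* integer-timestamp-style arithmetic */
        t1 = clock();
        for (i = 0; i < N; i++)
            fsum += (double) i;         /* FP-timestamp-style arithmetic */
        t2 = clock();

        printf("int64:  %.2f s\ndouble: %.2f s\n",
               (double) (t1 - t0) / CLOCKS_PER_SEC,
               (double) (t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }

On modern hardware I'd expect the two loops to come out close, which
would be consistent with the intuition above, but it is no substitute
for measuring the actual datetime code paths.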