Another thought here ... I'm looking at the sign hack
+            if (IntervalStyle == INTSTYLE_SQL_STANDARD &&
+                field[0][0] == '-' && i == 1 &&
+                field[i][0] != '-' && field[i][0] != '+')
+            {
+                /*----------
+                 * The SQL Standard defines the interval literal
+                 *    '-1 1:00:00'
+                 * to mean "negative 1 days and negative one hours"
+                 * while Postgres traditionally treated this as
+                 * meaning "negative 1 days and positive one hours".
+                 * In SQL_STANDARD style, flip the sign to conform
+                 * to the standard's interpretation.
and not liking it very much. Yes, it does the intended thing for strict
SQL-spec input, but it seems to produce a bunch of weird corner cases
for non-spec input. Consider
    1. -1 1:00:00              flips the sign
    2. - 1 1:00:00             doesn't flip the sign
    3. -1 day 1:00:00          doesn't flip the sign
    4. -2008-10 1:00:00        flips the sign
    5. -2008-10 1              doesn't flip the sign
    6. -2008 years 1:00:00     doesn't flip the sign
If the rule were that it never flipped the sign for non-SQL-spec input
then I think that'd be okay, but case 4 here puts the lie to that.
I'm also not entirely sure whether case 2 is allowed by the SQL spec,
but if it is then we've got a problem with it; and even if it isn't,
it's awfully hard to explain why it's treated differently from case 1.
I'm inclined to think we need a rule that's semantically based rather
than syntactically based.  For instance: if the first field is negative
and no other field has an explicit sign, then force all fields to be
<= 0.  This would probably have to be applied at the end of
DecodeInterval instead of on-the-fly within the loop.
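
To make that concrete, here is a rough standalone sketch of such a
post-pass.  This is not the actual DecodeInterval code; it pretends the
parsed per-field values live in a simple int array, whereas the real
function fills in a tm struct and fsec, so treat it only as an
illustration of the proposed rule:

#include <stdbool.h>
#include <stdio.h>

/*
 * Proposed rule, applied as a post-pass: if the first field is negative
 * and no later field carries an explicit sign, force every field value
 * to be <= 0.  "field" holds the raw input tokens, "val" the parsed
 * values (toy encoding).
 */
static void
force_negative_if_unsigned(char **field, int *val, int nf)
{
    bool        flip = (nf > 0 && field[0][0] == '-');
    int         i;

    for (i = 1; flip && i < nf; i++)
    {
        /* an explicit sign anywhere else disables the blanket negation */
        if (field[i][0] == '-' || field[i][0] == '+')
            flip = false;
    }

    if (!flip)
        return;

    for (i = 0; i < nf; i++)
    {
        if (val[i] > 0)
            val[i] = -val[i];
    }
}

int
main(void)
{
    /* '-1 1:00:00' -> days = -1, hours = 1 before the post-pass */
    char       *field[] = {"-1", "1:00:00"};
    int         val[] = {-1, 1};

    force_negative_if_unsigned(field, val, 2);
    printf("days = %d, hours = %d\n", val[0], val[1]);     /* -1, -1 */
    return 0;
}

The point being that the decision would be made from the parsed values
and explicit signs, not from which syntactic slot a field happened to
land in, so (as I read the proposal) cases 3 and 4 above would come out
the same way.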
Thoughts?
regards, tom lane