On 11/19/12 11:04 AM, Tom Lane wrote:
> I thought about this some more and realized that we can handle it
> by realizing that division by -1 is the same as negation, and so
> we can copy the method used in int4um. So the code would look like
>
> if (arg2 == -1)
> {
>     result = -arg1;
>     if (arg1 != 0 && SAMESIGN(result, arg1))
>         ereport(ERROR, ...);
>     PG_RETURN_INT32(result);
> }
>
> (with rather more comments than this, of course). This looks faster
> than what's there now, as well as removing the need for use of
> explicit INT_MIN constants.
Hah, I used exactly the same code in my initial patch. :)
See below.
> They'd better not, else they'll break many of our overflow checks.
> This is why we use -fwrapv with gcc, for example. Any other compiler
> with similar optimizations needs to be invoked with a similar switch.
Yes, that's the scary part.
Even with -fwrapv in gcc (I tried 4.6/4.7/4.8), it still optimizes away
my overflow check!
if (arg1 < 0 && -arg1 < 0) { ... }
Fortunately, for now it doesn't optimize away checks that use
SAMESIGN(). Who knows what could happen in the next version..
Clang's -fwrapv works as expected.
PathScale and Open64 (and probably icc) don't support -fwrapv at all.
I'm not sure whether they offer similar options, but they perform these
optimizations too.
The reality is that C compilers are not friendly to postcondition
checking: they treat signed integer overflow as undefined behavior,
so they can do whatever they want with it. Workaround options like
-fwrapv are often broken, and some compilers don't offer them at all.
That's why I am suggesting that we avoid postcondition checks for
signed integer overflow.
> Not every C compiler provides stdint.h, unfortunately --- otherwise
> I'd not be so resistant to depending on this.
Sigh.. You are right.
- xi