Xi Wang <xi.wang@gmail.com> writes:
> On 11/18/12 6:47 PM, Tom Lane wrote:
>> I was against this style of coding before, and I still am.
>> For one thing, it's just about certain to introduce conflicts
>> against system headers.
> I totally agree.
> I would be happy to rewrite the integer overflow checks without
> using these explicit constants, but it seems extremely tricky to
> do so.
I thought about this some more and realized that we can handle it
by observing that division by -1 is the same as negation, which
means we can copy the method used in int4um(). So the code would
look like
    if (arg2 == -1)
    {
        result = -arg1;
        if (arg1 != 0 && SAMESIGN(result, arg1))
            ereport(ERROR, ...);
        PG_RETURN_INT32(result);
    }
(with rather more comments than this, of course). This looks faster
than what's there now, as well as removing the need for explicit
INT_MIN constants.
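
For concreteness, here is a minimal standalone sketch of that logic,
with SAMESIGN defined as in the backend's int.c and plain stdio error
handling standing in for the elided ereport calls. It assumes wrapping
signed arithmetic, i.e. -fwrapv (see further down):

    #include <stdio.h>
    #include <stdlib.h>

    #define SAMESIGN(a,b)   (((a) < 0) == ((b) < 0))

    /* plain "int" stands in for PostgreSQL's int32 here */
    int
    int4div_sketch(int arg1, int arg2)
    {
        int     result;

        if (arg2 == 0)
        {
            fprintf(stderr, "division by zero\n");
            exit(1);
        }
        if (arg2 == -1)
        {
            /*
             * Division by -1 is negation.  The only case that can
             * overflow is arg1 == INT_MIN, where -arg1 wraps back to
             * INT_MIN and therefore keeps the same sign as arg1.
             */
            result = -arg1;
            if (arg1 != 0 && SAMESIGN(result, arg1))
            {
                fprintf(stderr, "integer out of range\n");
                exit(1);
            }
            return result;
        }
        return arg1 / arg2;
    }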
> Compared to (arg1 == INTn_MIN && arg2 == -1), the above check is
> not only more confusing and difficult to understand, but it also
> invokes undefined behavior (-INT_MIN overflow), which is dangerous:
> many C compilers will optimize away the check.
They'd better not, else they'll break many of our overflow checks.
This is why we use -fwrapv with gcc, for example. Any other compiler
with similar optimizations needs to be invoked with a similar switch.
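
For example, a typical check of ours, in the style of int4pl, computes
the result first and only then inspects its sign; sketched standalone:

    #include <stdio.h>
    #include <stdlib.h>

    #define SAMESIGN(a,b)   (((a) < 0) == ((b) < 0))

    /*
     * Post-hoc overflow check: if both inputs have the same sign, a
     * correct sum must share it, so a sign flip means the addition
     * wrapped.  Without -fwrapv (or an equivalent switch) a compiler
     * may assume signed overflow never happens and delete the test.
     */
    int
    int4pl_sketch(int arg1, int arg2)
    {
        int     result = arg1 + arg2;

        if (SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1))
        {
            fprintf(stderr, "integer out of range\n");
            exit(1);
        }
        return result;
    }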
> Since INTn_MIN and INTn_MAX are standard macros from the C library,
> can we assume that every C compiler provides them in stdint.h?
Not every C compiler provides stdint.h, unfortunately --- otherwise
I'd not be so resistant to depending on this.
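
Where such constants must be supplied by hand, note that the literal
2147483648 does not fit in a 32-bit int, so the minimum has to be
written as an expression; a hypothetical fallback spelling:

    /* hypothetical fallback for compilers lacking <stdint.h> */
    #define MY_INT32_MAX    0x7FFFFFFF
    #define MY_INT32_MIN    (-0x7FFFFFFF - 1)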
regards, tom lane