On Thu, Jan 24, 2013 at 3:48 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> The fundamental problem here is that the compiler, unless told otherwise
> by a compilation switch, believes it is entitled to assume that no
> integer overflow will happen anywhere in the program. Therefore, any
> error check that is looking for overflow *should* get optimized away.
> The only reason the compiler would fail to do that is if its optimizer
> isn't quite smart enough to prove that the code is testing for an
> overflow condition.
He's changing things to do something like

    if (INT_MAX - a < b) PG_THROW("a+b would overflow");
    else x = a + b;
Why would a smarter compiler be licensed to conclude that it can
optimize anything away? "INT_MAX - a < b" is well defined (at least as
long as a is non-negative), and the x = a + b won't execute unless it's
well defined too. (Actually we'll probably depend on the non-local exit
behaviour of PG_THROW, but any compiler has to be able to deal with
that anyway.)
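(Just as a sketch of the general pattern, and not what the patch
actually does: a pre-check that covers negative operands as well,
evaluating only expressions that are already well defined:

    #include <limits.h>
    #include <stdbool.h>

    /* Hypothetical helper, not from the patch: true if a + b would
     * overflow a signed int.  Every subtraction here is well defined,
     * so no overflow can occur while testing for one. */
    static bool
    add_would_overflow(int a, int b)
    {
        if (b > 0)
            return a > INT_MAX - b;   /* a + b would exceed INT_MAX */
        else
            return a < INT_MIN - b;   /* a + b would fall below INT_MIN */
    }

The caller raises the error when the helper returns true and only does
the addition when it's known to be safe.)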
The point that we have no way to be sure we've gotten rid of every such
case is a good one. Logically, as long as we're afraid of such things
we should continue to use -fwrapv, and as long as we're using -fwrapv
there's no urgency to fix the code. But if we do get rid of all the
known cases, then at least we have the option of dropping the flag once
we decide we've inspected the code enough, and had enough compilers
check it, to feel confident.
--
greg