Hello,
On 2020-02-07 15:42, Emre Hasegeli wrote:
>> > The patch looks unduly invasive to me, but I think that it might be
>> > right that we should go back to a macro-based implementation, because
>> > otherwise we don't have a good way to be certain that the function
>> > parameter won't get evaluated first.
>>
>> I'd first like to see some actual evidence of this being a problem,
>> rather than just the order of the checks.
>
> There seems to be enough evidence of this being the problem. We are
> better off going back to the macro-based implementation. I polished
> Keisuke Kuroda's patch, commenting on the performance issue, removed
> the check_float*_val() functions completely, and added unlikely() as
> Tom Lane suggested. It is attached. I confirmed with different
> compilers that the macro and unlikely() make this noticeably faster.
Hm, the diff has the macro tests as:
+ if (unlikely(isinf(val) && !(inf_is_valid)))
...
+ if (unlikely((val) == 0.0 && !(zero_is_valid)))
But the comment does not explain that these tests have to be in this
order: && evaluates left to right and short-circuits, so with the
operands swapped, the validity expression would be evaluated first for
non-constant arguments. E.g. if I understand this correctly:
+ if (!(zero_is_valid) && unlikely((val) == 0.0))
would have the same problem of evaluating "zero_is_valid" (which
might be an isinf(arg1) || isinf(arg2)) first, and so be exactly what
we try to avoid with the macro. Maybe adding this bit of info to the
comment would make it clearer?
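For illustration, here is a standalone sketch of the issue; the macro
body and the printed messages are made up for the demo (the real patch
reports errors through PostgreSQL's error machinery), but the
evaluation-order behaviour is plain C:

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* GCC/Clang branch-prediction hint, as used in the patch. */
#define unlikely(x) __builtin_expect((x) != 0, 0)

/* Sketch of the macro shape under discussion: the cheap test on "val"
 * sits on the left of &&, so the potentially expensive validity
 * expression on the right is only evaluated in the rare case where
 * the cheap test is already true. */
#define CHECKFLOATVAL(val, inf_is_valid, zero_is_valid) \
	do { \
		if (unlikely(isinf(val) && !(inf_is_valid))) \
			puts("value out of range: overflow"); \
		if (unlikely((val) == 0.0 && !(zero_is_valid))) \
			puts("value out of range: underflow"); \
	} while (0)

int
main(void)
{
	double		arg1 = 2.0;
	double		arg2 = 3.0;
	double		result = arg1 * arg2;

	/* In the common case, isinf(result) and result == 0.0 are false,
	 * so the isinf(arg1) || isinf(arg2) and arg1 == 0.0 arguments are
	 * never evaluated at all, thanks to short-circuiting. */
	CHECKFLOATVAL(result, isinf(arg1) || isinf(arg2), arg1 == 0.0);

	puts("no check fired");
	return 0;
}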
Also, a few places use the macro as:
+ CHECKFLOATVAL(result, true, true);
which evaluates to a complete NOP, since both conditions become
constant-false. IMHO this could be replaced with a comment like:
+ /* No CHECKFLOATVAL() needed, as both Inf and 0.0 are valid */
(or something along the lines of "no error can occur"), as otherwise
CHECKFLOATVAL() suggests to the casual reader that some checking is
done, while in reality no check happens at all (and hopefully the
compiler optimizes everything away, which might not be true for
debug builds).
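To make the NOP concrete, here is the same sketch macro with both
flags set to true; again the macro body is made up for the demo, not
taken from the patch:

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

#define unlikely(x) __builtin_expect((x) != 0, 0)

/* Same made-up macro body as in the sketch above. */
#define CHECKFLOATVAL(val, inf_is_valid, zero_is_valid) \
	do { \
		if (unlikely(isinf(val) && !(inf_is_valid))) \
			puts("overflow"); \
		if (unlikely((val) == 0.0 && !(zero_is_valid))) \
			puts("underflow"); \
	} while (0)

int
main(void)
{
	double		result = 0.0;

	/* With both flags true, each condition contains "&& !(true)",
	 * i.e. "&& 0": it is constant-false, so an optimizing compiler
	 * removes the test entirely, while a -O0 debug build may still
	 * emit the dead comparisons. */
	CHECKFLOATVAL(result, true, true);

	puts("no check fired, even though result is 0.0");
	return 0;
}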
--
Best regards,
Tels