On Mon, 12 Aug 2024 at 20:34, John Naylor <johncnaylorls@gmail.com> wrote:
> I just remembered an article I read a while back by a compiler
> engineer who said that compilers have heuristics that treat NULL
> pointers as "unlikely" since the most common reason to test that is so
> we can actually dereference it and do something with it, and in that
> case a NULL pointer is an exceptional condition. He also said it
> wasn't documented so you can only see this by looking at the source
> code. Instead of taking the time to verify that, I created some toy
> examples and it seems to be true:
>
> https://godbolt.org/z/dv6M5ecY5
Interesting.
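As a quick illustration of that heuristic, here's a toy function
along the same lines as the ones in the godbolt link (no hints
anywhere):

int
fetch_value(int *p)
{
	if (p == NULL)
		return -1;		/* typically laid out as the out-of-line path */
	return *p;			/* fall-through path the compiler assumes likely */
}

gcc at -O2 tends to branch away for the NULL case and keep the
dereference on the straight-line path, as your examples show.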
I agree with Andres' comment about there being no increase in binary
size overhead. The use of unlikely() should just swap the "call"
operands so that the standard_Executor* function goes on the happy
path. (These are all sibling calls, so in reality, most compilers
should emit "jmp" instead of "call".)
I also agree with Tom's comment about the tests for these executor
hooks not being hot. Each of them is executed once per query, so an
unlikely() isn't going to make any measurable difference to the
performance of a query.
Taking into account what your analysis uncovered, I think what might
be worth spending some time on would be looking for hot "var == NULL"
tests where the common path is true and seeing if using likely() on
any of those makes a meaningful impact on performance. Maybe
something like [1] could be used to inject a macro or function call
into each "if (<var> == NULL)" test to temporarily install some
telemetry and look for __FILE__ / __LINE__ combinations that are
almost always true. Grouping those results by __FILE__, __LINE__ and
sorting by count(*) desc might yield something interesting.
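For example, the injected macro could be as crude as the following
sketch. It relies on the GCC/Clang statement-expression extension,
keeps per-site static counters, and the print-every-millionth-hit
reporting is just a stand-in for whatever telemetry we'd actually
want (needs <stdio.h> and <stdbool.h> outside our usual headers):

#define NULL_TEST(ptr) \
	({ \
		static unsigned long nt_seen = 0; \
		static unsigned long nt_true = 0; \
		bool		nt_result = ((ptr) == NULL); \
		nt_seen++; \
		if (nt_result) \
			nt_true++; \
		if (nt_seen % 1000000 == 0) \
			fprintf(stderr, "%s:%d true %lu of %lu\n", \
					__FILE__, __LINE__, nt_true, nt_seen); \
		nt_result; \
	})

The Coccinelle rule would rewrite each "if (var == NULL)" into
"if (NULL_TEST(var))", and since each expansion gets its own
block-scope statics, the stderr output can be grouped by file and
line with a bit of scripting.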
David
[1] https://coccinelle.gitlabpages.inria.fr/website/