Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:
>> The results were interesting. While the small overlap between
>> samples from the two builds at most levels means that this was
>> somewhat unlikely to be just sampling noise, there could have
>> been alignment issues that account for some of the differences.
>> In short, the strict aliasing build always beat the other with 4
>> clients or fewer (on this 4 core machine), but always lost with
>> more than 4 clients.
>
> That is *weird*.
Yeah, my only theories are that it was an unlucky set of samples
(which seems a little thin looking at the numbers) or that some of
the -O3 optimizations improve pipelining at the cost of extra
cycles, and that context switching breaks up the pipelining enough
to make it a net loss at high concurrency. That doesn't seem quite
as thin as the other explanation, but it's not very satisfying
without some sort of confirmation.
>> Also, is there something I should do to deal with the warnings
>> before this would be considered a meaningful test?
>
> Dunno ... where were the warnings exactly?
All 10 were like this:

    warning: dereferencing type-punned pointer will break strict-aliasing rules
The warning is about reading a union using a different type than was
last stored there. It seems like there might sometimes be legitimate
reasons to do that, and that if it were broken with strict aliasing
it might be broken without. But strict aliasing is new territory
for me.
> Also, did you run the regression tests (particularly the parallel
> version) against the build?
Yes. The normal parallel `make check-world`, the `make
installcheck-world` against an install with
default_transaction_isolation = 'serializable' and
max_prepared_transactions = 10, and `make -C src/test/isolation
installcheck`. All ran without problem.
I'm inclined to try -O3 and -fstrict-aliasing separately, with more
iterations; but I want to fix anything that's wrong with the
aliasing first.
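To separate the two effects, something like the following should work
(a sketch; note that GCC turns on -fstrict-aliasing automatically at
-O2 and above, so the second build disables it explicitly):

```shell
# Build 1: -O3 with strict aliasing (implied at -O2 and above)
./configure CFLAGS="-O3"

# Build 2: default optimization level, strict aliasing disabled,
# to isolate the aliasing assumption from the other -O3 changes
./configure CFLAGS="-O2 -fno-strict-aliasing"
```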
-Kevin