Interesting - might be good to see your test script too (so we can
better understand how you are deciding if the runs are successful or not).
Also, any idea which rows are different? If you want something out of
the box that will do that for you, see DBIx::Compare.
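(For a rough-and-ready alternative, a plain EXCEPT query in each direction
will show the divergent rows. The sketch below uses SQLite purely so it is
self-contained, and the table/column names are made up for illustration;
the same EXCEPT queries work in Postgres once both tables are reachable
from one connection, e.g. via postgres_fdw.)

```python
import sqlite3

# Self-contained demo: two copies of a table, with one row diverging.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts_primary (aid INTEGER, abalance INTEGER);
    CREATE TABLE accounts_replica (aid INTEGER, abalance INTEGER);
    INSERT INTO accounts_primary VALUES (1, 100), (2, 200), (3, 300);
    INSERT INTO accounts_replica VALUES (1, 100), (2, 250), (3, 300);
""")

# Rows present on the primary but not on the replica, and vice versa.
missing_on_replica = conn.execute(
    "SELECT * FROM accounts_primary EXCEPT SELECT * FROM accounts_replica"
).fetchall()
missing_on_primary = conn.execute(
    "SELECT * FROM accounts_replica EXCEPT SELECT * FROM accounts_primary"
).fetchall()

print(missing_on_replica)  # [(2, 200)]
print(missing_on_primary)  # [(2, 250)]
```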
regards
Mark
On 28/05/17 04:12, Erik Rijkers wrote:
>
> ok, ok...
>
> (The thing is, I am trying to pre-digest the output, but it takes time.)
>
> I can do this now: attached some output that belongs with this group
> of 100 1-minute runs:
>
> -- out_20170525_1426.txt
> 100 -- pgbench -c 64 -j 8 -T 60 -P 12 -n -- scale 25
> 82 -- All is well.
> 18 -- Not good.
>
> That is the worst set of runs among those I showed earlier.
>
> That is: out_20170525_1426.txt, plus the
> 2x18 logfiles that the 18 failed runs produced.
> Those logfiles have names like:
> logrep.20170525_1426.1436.1.scale_25.clients_64.NOK.log
> logrep.20170525_1426.1436.2.scale_25.clients_64.NOK.log
>
> .1. = primary
> .2. = replica
>
> Please disregard the errors around pg_current_wal_location(). (They were
> caused by some code that dumps some wal into zipfiles, which obviously
> stopped working after the function was removed/renamed.) There are also
> some unimportant errors from the test harness where I call with the
> wrong port. Not interesting, I don't think.
>
>