... BTW, another resource worth looking at is src/bin/pg_test_timing/
which we just improved a few days ago [1]. What I see on two different
Linux-on-Intel boxes is that the loop time it reports is 16 ns
and change, and the clock readings appear accurate to full nanosecond
precision. If instr_time.h is changed to use CLOCK_MONOTONIC_COARSE,
the loop time drops to a bit over 5 ns, which would certainly be a nice
win if it were cost-free. But the clock precision degrades to 1 ms.
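
For anyone who wants to poke at this without building pg_test_timing,
a rough standalone sketch along these lines (Linux-specific, and the
numbers will of course vary by machine) shows both effects at once:
the per-call cost and how many distinct readings you actually get.

#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define NLOOPS 10000000

static void
test_clock(clockid_t id, const char *name)
{
    struct timespec start, prev, ts;
    long        changes = 0;
    int64_t     elapsed_ns;

    clock_gettime(id, &start);
    prev = start;
    for (int i = 0; i < NLOOPS; i++)
    {
        clock_gettime(id, &ts);
        if (ts.tv_nsec != prev.tv_nsec || ts.tv_sec != prev.tv_sec)
            changes++;
        prev = ts;
    }
    /* prev now holds the last reading */
    elapsed_ns = (int64_t) (prev.tv_sec - start.tv_sec) * 1000000000 +
        (prev.tv_nsec - start.tv_nsec);

    printf("%-24s ~%.1f ns/call, %ld distinct readings out of %d\n",
           name, (double) elapsed_ns / NLOOPS, changes, NLOOPS);
}

int
main(void)
{
    test_clock(CLOCK_MONOTONIC, "CLOCK_MONOTONIC");
    test_clock(CLOCK_MONOTONIC_COARSE, "CLOCK_MONOTONIC_COARSE");
    return 0;
}

With a ~1 ms tick you'd expect only a few dozen distinct readings over
the whole loop, versus essentially one per call for CLOCK_MONOTONIC.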
It is really hard to believe that giving up a factor of a million
in clock precision is going to be an acceptable tradeoff for saving
~10 ns per clock reading. Maybe with a lot of fancy statistical
arm-waving, and an assumption that people always look at averages
over long query runs, you could make a case that this change isn't
going to result in a disaster. But EXPLAIN's results are surely
going to become garbage-in-garbage-out for any query that doesn't
run for (at least) hundreds of milliseconds.
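
To put even rougher numbers on that (illustrative only): a single
quantized reading can be off by about one tick, so the worst-case
relative error is on the order of tick/duration:

#include <stdio.h>

int
main(void)
{
    /* made-up interval lengths, with the ~1 ms tick measured above */
    const double tick_ms = 1.0;
    const double durations_ms[] = {0.2, 2.0, 20.0, 200.0};

    for (int i = 0; i < 4; i++)
        printf("%7.1f ms interval -> worst-case error ~%.1f%%\n",
               durations_ms[i], 100.0 * tick_ms / durations_ms[i]);
    return 0;
}

which is why you have to get into the hundreds-of-milliseconds range
before the error fades into the noise.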
regards, tom lane
[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=0b096e379e6f9bd49d38020d880a7da337e570ad