At some point this patch switched from rdtsc to rdtscp, which imo largely
negates the point of it. What led to that?
From what I can gather, this appears to have been an oversight when David first re-applied the patch on top of the instr_time changes that were committed.
I've now come back to this and rebased it, and also:
- Corrected pg_get_ticks_fast to use RDTSC instead of RDTSCP
- Check CPUID leaf 16H if leaf 15H does not contain frequency information (per research, relevant for some CPUs) -- see the first sketch after this list
- Fixed incorrect reporting in pg_test_timing caused by the histogram being too small (32 => 64 bits)
- Fixed indentation per pgindent
- Added support for VMs running under KVM/VMware hypervisors (a rough sketch of the detection follows the benchmark numbers below)
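
To illustrate the 15H/16H fallback, here is a rough sketch of the frequency detection (not the exact patch code -- the function name and error handling are illustrative, and it assumes the GCC/clang <cpuid.h> helpers):

/*
 * Sketch: derive the TSC frequency from CPUID leaf 0x15, falling back to
 * leaf 0x16 when the crystal clock field is zero.
 */
#include <cpuid.h>
#include <stdint.h>

static uint64_t
tsc_freq_hz_from_cpuid(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 0x15: EBX/EAX = TSC-to-crystal ratio, ECX = crystal clock in Hz */
    if (!__get_cpuid(0x15, &eax, &ebx, &ecx, &edx) || eax == 0 || ebx == 0)
        return 0;               /* ratio not enumerated */

    if (ecx != 0)
        return (uint64_t) ecx * ebx / eax;

    /*
     * Some CPUs leave ECX at zero; leaf 0x16 EAX reports the base frequency
     * in MHz, which tracks the TSC frequency closely enough for our purposes.
     */
    if (__get_cpuid(0x16, &eax, &ebx, &ecx, &edx) && eax != 0)
        return (uint64_t) eax * 1000000;

    return 0;                   /* caller falls back to clock_gettime() */
}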
On the last item (hypervisor support): this does indeed make a difference on VMs, contrary to the code comment in earlier versions (and I've not seen any odd behavior, FWIW):
On a c5.xlarge (Skylake-SP or Cascade Lake) on AWS, with the same test as done initially in this thread:
SELECT COUNT(*) FROM lotsarows;
Time: 974.423 ms

EXPLAIN (ANALYZE, TIMING OFF) SELECT COUNT(*) FROM lotsarows;
Time: 1336.196 ms (00:01.336)

Without patch:

EXPLAIN (ANALYZE) SELECT COUNT(*) FROM lotsarows;
Time: 2165.069 ms (00:02.165)
Per loop time including overhead: 22.15 ns

With patch:

EXPLAIN (ANALYZE, TIMING ON) SELECT COUNT(*) FROM lotsarows;
Time: 1654.289 ms (00:01.654)
Per loop time including overhead: 9.81 ns
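
For reference, the hypervisor support works roughly like this (a sketch, not the patch verbatim; it uses the GCC/clang __cpuid() macro from <cpuid.h>, the function name is illustrative, and the real code has to deal with the __cpuidex portability question mentioned further down):

/*
 * Sketch: detect KVM/VMware via the hypervisor CPUID leaves and read the
 * advertised TSC frequency.
 */
#include <cpuid.h>
#include <stdint.h>
#include <string.h>

static uint64_t
tsc_freq_hz_from_hypervisor(void)
{
    unsigned int eax, ebx, ecx, edx;
    char        sig[13] = {0};

    /* Leaf 1, ECX bit 31: "running under a hypervisor" */
    __cpuid(1, eax, ebx, ecx, edx);
    if ((ecx & (1U << 31)) == 0)
        return 0;

    /* Leaf 0x40000000: max hypervisor leaf in EAX, vendor signature in EBX/ECX/EDX */
    __cpuid(0x40000000, eax, ebx, ecx, edx);
    memcpy(sig, &ebx, 4);
    memcpy(sig + 4, &ecx, 4);
    memcpy(sig + 8, &edx, 4);

    if (strcmp(sig, "KVMKVMKVM") != 0 && strcmp(sig, "VMwareVMware") != 0)
        return 0;
    if (eax < 0x40000010)
        return 0;

    /* Leaf 0x40000010: (virtual) TSC frequency in kHz in EAX */
    __cpuid(0x40000010, eax, ebx, ecx, edx);
    return (uint64_t) eax * 1000;
}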
I'm registering this again in the current commitfest to help with reviews.
Open questions I have:
- Could we rely on checking whether the TSC is invariant (via CPUID), instead of relying on Linux choosing it as a clocksource? (see the sketch after this list)
- For the hypervisor CPUID checks I had to rely on __cpuidex, which is only available in newer GCC versions (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95973). How do we best check for its presence -- by compiler version, or rather a configure check? Note this is also why the patch fails the clang compiler-warning check in CI, even though recent clang versions do support it (https://reviews.llvm.org/D121653).
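
On the first question, a minimal sketch of what such a check could look like (assuming the GCC/clang <cpuid.h> helper; CPUID leaf 0x80000007, EDX bit 8 is the invariant-TSC flag, and the function name is illustrative):

#include <cpuid.h>
#include <stdbool.h>

static bool
tsc_is_invariant(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 0x80000007, EDX bit 8: invariant TSC */
    if (!__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx))
        return false;
    return (edx & (1U << 8)) != 0;
}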
Thanks,
Lukas