Is there any alternative approach to measuring performance as if the cache were empty? The goal is basically to calculate the maximum possible I/O time for a query, to get a range between minimum and maximum timing. It's fine if this happens only during an EXPLAIN ANALYZE call, not for regular executions. One thing I can think of: even if the data in storage might be stale, issue read calls against it anyway, purely for measurement. For EXPLAIN ANALYZE that should be fine, since it doesn't return real data anyway. Is it possible that some pages do not exist in storage at all? Is there a different way to simulate something like that?
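For contrast, the restart-based workflow being avoided here looks roughly like this. This is a sketch only, assuming Linux, a local cluster, and superuser access; the paths and the query are placeholders:

```shell
# Manual cold-cache measurement (sketch; Linux-only, needs a local cluster):
pg_ctl -D "$PGDATA" restart                   # empties shared_buffers
sync                                          # flush dirty OS pages to disk
echo 3 | sudo tee /proc/sys/vm/drop_caches    # drop the OS page cache too
psql -c "EXPLAIN (ANALYZE, BUFFERS) SELECT ..."   # cold-cache timing
```

Note that dropping only shared_buffers is not enough for a true worst case: the kernel page cache can still serve reads, which is why the `drop_caches` step is part of the usual recipe.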
Vladimir Churyukin <vladimir@churyukin.com> writes:
> There is often a need to test particular queries executed in the worst-case
> scenario, i.e. right after a server restart or with no or minimal amount of
> data in shared buffers. In Postgres it's currently hard to achieve (other
> than to restart the server completely to run a single query, which is not
> practical). Is there a simple way to introduce a GUC variable that makes
> queries bypass shared_buffers and always read from storage? It would make
> testing like that orders of magnitude simpler. I mean, are there serious
> technical obstacles or any other objections to that idea in principle?
It's a complete non-starter. Pages on disk are not necessarily up to date; but what is in shared buffers is.
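The staleness is observable directly: any buffer marked dirty in shared_buffers has newer contents than its on-disk copy. A minimal sketch, assuming a local cluster and the contrib pg_buffercache extension:

```shell
# Install the pg_buffercache view (ships in contrib):
psql -c "CREATE EXTENSION IF NOT EXISTS pg_buffercache;"
# Count buffers whose in-memory contents are newer than the on-disk pages;
# reads that bypassed shared_buffers would see stale data for all of these:
psql -c "SELECT count(*) AS dirty_buffers FROM pg_buffercache WHERE isdirty;"
```

A nonzero count is exactly the case where bypassing shared_buffers would return (or, for measurement, read) outdated pages.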