Hi Lukas,
On 1/2/23 20:50, Lukas Fittl wrote:
> Thanks for continuing to work on this patch, and my apologies for
> silence on the patch.
It would be great if you could review it.
Please also share your thoughts on exposing the clock source in use as a
GUC and on renaming INSTR_TIME_GET_DOUBLE() to INSTR_TIME_GET_SECS().
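To make the rename half of that more concrete, here is a minimal,
self-contained sketch of how the new name could coexist with the old one
during a transition. The nanosecond-based struct and the SKETCH_NS_PER_S
constant are assumptions made up for the example and are not taken from
instr_time.h:

#include <stdint.h>
#include <stdio.h>

typedef struct instr_time_sketch
{
    int64_t     ticks;          /* elapsed nanoseconds (assumed) */
} instr_time_sketch;

#define SKETCH_NS_PER_S INT64_C(1000000000)

/* proposed name: the interval in seconds, as a double */
#define INSTR_TIME_GET_SECS(t)   ((double) (t).ticks / SKETCH_NS_PER_S)

/* old name kept as an alias while callers are converted */
#define INSTR_TIME_GET_DOUBLE(t) INSTR_TIME_GET_SECS(t)

int
main(void)
{
    instr_time_sketch t = {3 * SKETCH_NS_PER_S / 2};    /* 1.5 s */

    printf("%.3f s\n", INSTR_TIME_GET_SECS(t));     /* prints 1.500 s */
    printf("%.3f s\n", INSTR_TIME_GET_DOUBLE(t));   /* same, via the alias */
    return 0;
}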
I rebased again on master because of [1]. Patches attached.
>
> It's been hard to make time, and especially so because I typically
> develop on an ARM-based macOS system where I can't test this directly
> - hence my tests with virtualized EC2 instances, where I ran into the
> timing oddities.
That's both good and bad: bad for doing the development, but good because
the implementation gets tested on more virtualized setups, especially
given that I also encountered "interesting" behavior on VMware (see my
previous mails).
>
> On Mon, Jan 2, 2023 at 5:28 AM David Geier <geidav.pg@gmail.com> wrote:
>
> The INSTR_TIME_GET_MICROSEC() returns a uint64 while the other
> variants
> return double. This seems error prone. What about renaming the
> function
> or also have the function return a double and cast where necessary at
> the call site?
>
>
> Minor note, but in my understanding using a uint64 (where we can) is
> faster for any simple arithmetic we do with the values.
That's true. So the argument could be that for seconds and milliseconds
we want the extra precision, while microseconds are precise enough.
Still, we could also make the seconds and milliseconds conversion code
integer-only and e.g. return two integers holding the parts before and
after the decimal point. As far as I can see, these functions are not
used anywhere in performance-critical code, so it doesn't really make a
difference performance-wise.
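For what it's worth, here is a minimal, self-contained sketch of what such
an integer-only variant could look like, assuming a nanosecond-based int64
representation. The struct name, the SKETCH_NS_PER_MS constant and the
INSTR_TIME_GET_MILLISEC_PARTS helper are made up for illustration and are
not part of the actual patch:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct instr_time_sketch
{
    int64_t     ticks;          /* elapsed nanoseconds (assumed) */
} instr_time_sketch;

#define SKETCH_NS_PER_MS INT64_C(1000000)

/*
 * Split an interval into whole milliseconds and the nanosecond remainder,
 * avoiding any floating-point arithmetic. Since 1 ms = 1,000,000 ns, the
 * remainder printed with six digits is the fractional part.
 */
static inline void
INSTR_TIME_GET_MILLISEC_PARTS(instr_time_sketch t,
                              int64_t *whole_ms, int64_t *frac_ns)
{
    *whole_ms = t.ticks / SKETCH_NS_PER_MS;
    *frac_ns = t.ticks % SKETCH_NS_PER_MS;
}

int
main(void)
{
    instr_time_sketch t = {INT64_C(1234567890)};    /* 1.23456789 s */
    int64_t     whole_ms;
    int64_t     frac_ns;

    INSTR_TIME_GET_MILLISEC_PARTS(t, &whole_ms, &frac_ns);

    /* prints "1234.567890 ms" without ever converting to double */
    printf("%" PRId64 ".%06" PRId64 " ms\n", whole_ms, frac_ns);
    return 0;
}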
>
> +1, and feel free to carry this patch forward - I'll try to make an
> effort to review my earlier testing issues again, as well as your
> later improvements to the patch.
Moved to the current commitfest. Will you sign up as a reviewer?
>
> Also, FYI, I just posted an alternate idea for speeding up EXPLAIN
> ANALYZE with timing over in [0], using a sampling-based approach to
> reduce the timing overhead.
Interesting idea. I'll reply with some thoughts on the corresponding thread.
[1]
https://www.postgresql.org/message-id/flat/CALDaNm3kRBGPhndujr9JcjjbDCG3anhj0vW8b9YtbXrBDMSvvw%40mail.gmail.com
--
David Geier
(ServiceNow)