On 2024-Jul-11, Nathan Bossart wrote:
> I'm imagining something like this:
>
> struct timespec delay;
> TimestampTz end_time;
>
> end_time = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), msec);
>
> do
> {
>     long        secs;
>     int         microsecs;
>
>     TimestampDifference(GetCurrentTimestamp(), end_time,
>                         &secs, &microsecs);
>
>     delay.tv_sec = secs;
>     delay.tv_nsec = microsecs * 1000;
>
> } while (nanosleep(&delay, NULL) == -1 && errno == EINTR);
This looks nicer. We could deal with clock drift easily (in case the
sysadmin winds the clock back) by testing that the recomputed
tv_sec/tv_nsec delay is never longer than the initial time to sleep. I
don't know how common this
situation is nowadays, but I remember debugging a system years ago where
autovacuum was sleeping for a very long time because of that. I can't
remember now if we did anything in the code to cope, or just told
sysadmins not to do that anymore :-)
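
For illustration, the clamp could look something like this (an
untested sketch; the helper name and the extra "msec" argument are my
invention, not something from Nathan's patch):

#include "postgres.h"

#include <time.h>

#include "utils/timestamp.h"

/*
 * Untested sketch: compute the delay remaining until end_time, but never
 * longer than the originally requested "msec", so that a clock wound
 * backwards cannot extend the sleep.
 */
static void
compute_clamped_delay(TimestampTz end_time, long msec,
                      struct timespec *delay)
{
    long        secs;
    int         microsecs;

    TimestampDifference(GetCurrentTimestamp(), end_time,
                        &secs, &microsecs);

    /* clock went backwards?  fall back to the full requested delay */
    if (secs > msec / 1000 ||
        (secs == msec / 1000 && microsecs > (int) ((msec % 1000) * 1000)))
    {
        secs = msec / 1000;
        microsecs = (int) ((msec % 1000) * 1000);
    }

    delay->tv_sec = secs;
    delay->tv_nsec = microsecs * 1000L;
}

Note that TimestampDifference() never reports a negative difference (it
returns zero if the stop time is not in the future), so the clamp only
fires when the clock has moved backwards and the computed remainder
exceeds what the caller asked for.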
FWIW my (Linux's) nanosleep() manpage contains this note:
  If the interval specified in req is not an exact multiple of the
  granularity underlying clock (see time(7)), then the interval will
  be rounded up to the next multiple.  Furthermore, after the sleep
  completes, there may still be a delay before the CPU becomes free
  to once again execute the calling thread.
It's not clear to me what happens if the time to sleep is zero, so maybe
there should be a "if tv_sec == 0 && tv_nsec == 0 then break" statement
at the bottom of the loop, to quit without sleeping one more time than
needed.
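
Putting the EINTR retry and that early exit together, the loop might
end up looking roughly like this (again untested, and restructured from
the do/while above into a plain loop; the function name is just for
illustration):

#include "postgres.h"

#include <errno.h>
#include <time.h>

#include "utils/timestamp.h"

static void
sleep_until(TimestampTz end_time)
{
    for (;;)
    {
        long        secs;
        int         microsecs;
        struct timespec delay;

        TimestampDifference(GetCurrentTimestamp(), end_time,
                            &secs, &microsecs);

        /* nothing left to sleep: quit without one extra nanosleep() */
        if (secs == 0 && microsecs == 0)
            break;

        delay.tv_sec = secs;
        delay.tv_nsec = microsecs * 1000L;

        /* restart only when interrupted by a signal */
        if (nanosleep(&delay, NULL) == 0 || errno != EINTR)
            break;
    }
}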
For Windows, this [1] looks like an interesting and possibly relevant
read (though maybe SleepEx already does what we want to do here.)
[1] https://randomascii.wordpress.com/2020/10/04/windows-timer-resolution-the-great-rule-change/
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
"Having your biases confirmed independently is how scientific progress is
made, and hence made our great society what it is today" (Mary Gardiner)