Re: reducing statistics write overhead - Mailing list pgsql-hackers

From: Magnus Hagander
Subject: Re: reducing statistics write overhead
Date:
Msg-id: 48C4D9A0.50207@hagander.net
In response to: Re: reducing statistics write overhead (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: reducing statistics write overhead (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Tom Lane wrote:
> Martin Pihlak <martin.pihlak@gmail.com> writes:
>> I had also previously experimented with stat()-based polling but ran into
>> the same issue - no portable high-resolution timestamp on files. I guess
>> stat() is unusable unless we can live with a 1 second update interval for the
>> stats (e.g. the backend reads the file if it is within 1 second of the request).
> 
>> One alternative is to include a timestamp in the stats file header - the
>> backend can then wait on that -- check the timestamp, sleep, resend the
>> request, loop. Not particularly elegant, but easy to implement. Would this
>> be acceptable?
> 
> Timestamp within the file is certainly better than trying to rely on
> filesystem timestamps.  I doubt 1 sec resolution is good enough, and

We'd need half-second resolution just to keep up with the level we
have *now*, don't we?
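
For illustration, here is a minimal sketch in C of the "check the
timestamp, sleep, resend the request, loop" idea quoted above. This is
not the actual pgstat code; the file name, header layout, and the
send_stats_request()/current_time_usec() helpers are hypothetical
stand-ins, with the collector assumed to stamp the header using its own
clock (which is what sidesteps the filesystem-timestamp problem).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

#define STATS_FILE        "pgstat.stat"   /* hypothetical file name */
#define POLL_INTERVAL_US  100000          /* 100 ms between checks */
#define MAX_ATTEMPTS      50              /* give up after ~5 seconds */

typedef struct StatsFileHeader
{
    uint32_t    format_id;      /* file format version */
    int64_t     write_time;     /* usec timestamp written by the collector */
} StatsFileHeader;

/* Hypothetical helpers, stubbed so the sketch stands alone. */
static void
send_stats_request(void)
{
    /* In the real system this would message the stats collector. */
}

static int64_t
current_time_usec(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return (int64_t) tv.tv_sec * 1000000 + tv.tv_usec;
}

/* Read the write timestamp from the file header; -1 if unreadable. */
static int64_t
read_stats_timestamp(void)
{
    StatsFileHeader hdr;
    FILE       *fp = fopen(STATS_FILE, "rb");

    if (fp == NULL)
        return -1;
    if (fread(&hdr, sizeof(hdr), 1, fp) != 1)
    {
        fclose(fp);
        return -1;
    }
    fclose(fp);
    return hdr.write_time;
}

/*
 * Check the timestamp, sleep, resend the request, loop -- until the
 * file is at least as new as our request, or we give up.
 */
static bool
wait_for_fresh_stats(int64_t request_time)
{
    for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++)
    {
        if (read_stats_timestamp() >= request_time)
            return true;            /* collector has rewritten the file */

        usleep(POLL_INTERVAL_US);
        send_stats_request();       /* resend in case the request was lost */
    }
    return false;                   /* timed out; use whatever file exists */
}

int
main(void)
{
    int64_t     request_time = current_time_usec();

    send_stats_request();
    if (wait_for_fresh_stats(request_time))
        printf("fresh stats file available\n");
    else
        printf("timed out waiting for a fresh stats file\n");
    return 0;
}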

> I'd also be worried about issues like clock skew between the
> postmaster's time and the filesystem's time.

Can that even happen on a local filesystem? I guess you could put the
file on NFS, though that seems.. eh.. sub-optimal.. in more than one
way.

//Magnus

