On Wed, 16 Nov 2022 at 00:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Andres Freund <andres@anarazel.de> writes:
> > On 2022-11-15 23:14:42 +0000, Simon Riggs wrote:
> >> Hence more frequent compression is effective at reducing the overhead.
> >> But too frequent compression slows down the startup process, which
> >> can't then keep up.
> >> So we're just looking for an optimal frequency of compression for any
> >> given workload.
>
> > What about making the behaviour adaptive based on the amount of wasted effort
> > during those two operations, rather than just a hardcoded "emptiness" factor?
>
> Not quite sure how we could do that, given that those things aren't even
> happening in the same process. But yeah, it does feel like the proposed
> approach is only going to be optimal over a small range of conditions.

I have not been able to think of a simple way to autotune it.
> > I don't think the xids % KAX_COMPRESS_FREQUENCY == 0 filter is a good idea -
> > if you have a workload with plenty of subxids you might end up never compressing
> > because xids divisible by KAX_COMPRESS_FREQUENCY will end up as a subxid
> > most/all of the time.
>
> Yeah, I didn't think that was too safe either.
> It'd be more reliable
> to use a static counter to skip all but every N'th compress attempt
> (something we could do inside KnownAssignedXidsCompress itself, instead
> of adding warts at the call sites).

I was thinking exactly that myself, for the same reason: it keeps
all of the logic inside KnownAssignedXidsCompress().
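
To be concrete, here is a rough standalone sketch of the shape I have
in mind; the counter name and the frequency value are illustrative
only, not taken from the patch:

#include <stdio.h>
#include <stdbool.h>

#define KAX_COMPRESS_FREQUENCY 128	/* illustrative value */

/*
 * Skip all but every N'th non-forced compression attempt, using a
 * static counter inside the function itself.  Unlike the
 * xids % KAX_COMPRESS_FREQUENCY filter at the call sites, this cannot
 * starve: it fires after N attempts regardless of which xid values
 * reach us, so a workload where those xids always land on subxids
 * still gets compressed.
 */
static void
KnownAssignedXidsCompress(bool force)
{
	static int	attempts_since_compress = 0;	/* illustrative name */

	if (!force && ++attempts_since_compress < KAX_COMPRESS_FREQUENCY)
		return;					/* cheap early exit */
	attempts_since_compress = 0;

	/* the existing compression work would happen here */
	printf("compressing\n");
}

int
main(void)
{
	int		i;

	/* 300 unforced calls => compresses twice (at calls 128 and 256) */
	for (i = 0; i < 300; i++)
		KnownAssignedXidsCompress(false);

	/* a forced call always compresses and resets the counter */
	KnownAssignedXidsCompress(true);
	return 0;
}

The important property is that the decision is driven by how many
attempts have been made, not by the xid values themselves, which
addresses the starvation concern directly, and the call sites stay
clean because they just call the function unconditionally.
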
--
Simon Riggs http://www.EnterpriseDB.com/