Hi,
On 2022-06-16 22:22:28 -0700, Andres Freund wrote:
> On 2022-06-17 16:53:31 +1200, David Rowley wrote:
> > On Fri, 17 Jun 2022 at 15:33, Peter Geoghegan <pg@bowt.ie> wrote:
> > > Have you tried this with the insert benchmark [1]?
> >
> > I was mostly focusing on the performance of the hashed saop feature
> > after having removed the additional fields that pushed ExprEvalStep
> > over 64 bytes in 14.
> >
> > I agree it would be good to do further benchmarking to see if there's
> > anything else that's snuck into 15 that's slowed that benchmark down,
> > but we can likely work on that after we get the ExprEvalStep size back
> > to 64 bytes again.
>
> I did reproduce a regression between 14 and 15, using both pgbench -Mprepared
> -S (scale 1) and TPC-H Q01 (scale 5). Between 7-10% - not good, particularly
> since that hadn't been noticed so far. Fixing the json size issue gets that
> down to ~2%. Not sure yet what that remainder is caused by.

The remaining difference looks like it's largely caused by the
enable_timeout_after(IDLE_STATS_UPDATE_TIMEOUT, ...) call introduced as part
of the pgstats patch. It's only really visible when I pin a single-connection
pgbench to the same CPU core as the server (the pinning alone gives a ~16%
boost here).

It's not the timeout itself - that we amortize nicely (via 09cf1d522). It's
that enable_timeout_after() does a GetCurrentTimestamp().
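
To make that concrete: the relative-delay API has to read the clock to turn
the delay into an absolute fire time. Roughly (a simplified sketch, not the
actual timeout.c code):

    void
    enable_timeout_after(TimeoutId id, int delay_ms)
    {
        TimestampTz now = GetCurrentTimestamp();    /* the expensive part */
        TimestampTz fin_time = TimestampTzPlusMilliseconds(now, delay_ms);

        /* ... queue the timeout and (re)arm the alarm for fin_time ... */
    }

So every time we go idle and arm IDLE_STATS_UPDATE_TIMEOUT we pay for an
extra clock read.
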
Not sure yet what the best way to fix that is.

We could just leave the timer active across queries and add a gating
condition checking for idleness to the IdleStatsUpdateTimeoutPending branch
in ProcessInterrupts()?
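
Roughly like this (a sketch only; "backend_is_idle" is a stand-in for
whatever condition we'd actually use, e.g. DoingCommandRead or a flag set
when entering the idle loop):

    /* in ProcessInterrupts() */
    if (IdleStatsUpdateTimeoutPending)
    {
        IdleStatsUpdateTimeoutPending = false;

        /* only flush stats if the backend really is idle */
        if (backend_is_idle)
            pgstat_report_stat(true);
    }
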
Or we could add a timeout.c API that lets the caller pass in the reference
timestamp for the timeout? pgstat_report_stat() uses
GetCurrentTransactionStopTimestamp(); it seems like it'd make sense to use
the same timestamp for arming the timeout?
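
I.e. something vaguely like this, with an entirely made-up function name and
an illustrative delay, just to show the caller supplying the timestamp:

    /* hypothetical timeout.c addition */
    extern void enable_timeout_after_from(TimeoutId id, TimestampTz now,
                                          int delay_ms);

    /* where we currently arm the idle stats timeout */
    enable_timeout_after_from(IDLE_STATS_UPDATE_TIMEOUT,
                              GetCurrentTransactionStopTimestamp(),
                              idle_stats_delay_ms);    /* illustrative delay */

That way arming the timer could reuse the timestamp pgstat_report_stat()
already works with, instead of doing another GetCurrentTimestamp().
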
- Andres