On Wed, Jun 10, 2015 at 10:20 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Jan Wieck <jan@wi3ck.info> writes:
>> The attached patch demonstrates that less aggressive spinning and (much)
>> more often delaying improves the performance "on this type of machine".
>
> Hm. One thing worth asking is why the code didn't converge to a good
> value of spins_per_delay without help. The value should drop every time
> we had to delay, so under heavy contention it ought to end up small
> anyhow, no? Maybe we just need to alter the feedback loop a bit.
>
> (The comment about uniprocessors vs multiprocessors seems pretty wacko in
> this context, but at least the sign of the feedback term seems correct.)

The code seems to have been written with the idea that we should
converge to MAX_SPINS_PER_DELAY if spinning *ever* works. The way
that's implemented is that, if we get a spinlock without having to
delay, we add 100 to spins_per_delay, but if we have to delay at least
once (potentially hundreds of times), then we subtract 1.
As a result, spins_per_delay will stay above 900 most of the time even
if only 1% of the lock acquisitions manage to get the lock without
delaying.

It is possible that, as you say, all we need to do is alter the
feedback loop so that, say, we subtract 1 every time we delay (rather
than every time we have at least 1 delay) and add 1 (rather than 100)
every time we don't end up needing to delay. I'm a bit concerned,
though, that this would tend to make spins_per_delay unstable.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company