
From Andres Freund
Subject Re: s_lock() seems too aggressive for machines with many sockets
Date
Msg-id 20150610155813.GI10551@awork2.anarazel.de
In response to Re: s_lock() seems too aggressive for machines with many sockets  (Jan Wieck <jan@wi3ck.info>)
Responses Re: s_lock() seems too aggressive for machines with many sockets
List pgsql-hackers
On 2015-06-10 11:51:06 -0400, Jan Wieck wrote:
> >ret = pg_atomic_fetch_sub_u32(&buf->state, 1);
> >
> >if (ret & BM_PIN_COUNT_WAITER)
> >{
> >    pg_atomic_fetch_sub_u32(&buf->state, BM_PIN_COUNT_WAITER);
> >    /* XXX: deal with race that another backend has set BM_PIN_COUNT_WAITER */
> >}
> 
> There are atomic AND and OR functions too, at least in the GCC built-in
> parts. We might be able to come up with pg_atomic_...() versions of them and
> avoid the race part.

The race part isn't actually about that. It's that BM_PIN_COUNT_WAITER
might have been set after the fetch_sub above.
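
Just as a sketch (reusing the names from the snippet above and the
compare-exchange primitive from atomics.h; this isn't actual bufmgr.c code),
the pin release and the flag handling could be folded into one CAS loop, so a
flag set in between can't be lost:

/*
 * Hypothetical sketch: drop our pin and, if the waiter flag was observed,
 * clear it in the same atomic update.  The CAS fails and retries whenever
 * buf->state changed concurrently, so a flag set after we read the state
 * is never silently wiped out.
 */
uint32      old_state = pg_atomic_read_u32(&buf->state);
uint32      new_state;

do
{
    new_state = old_state - 1;              /* release our pin */
    if (old_state & BM_PIN_COUNT_WAITER)
        new_state &= ~BM_PIN_COUNT_WAITER;  /* we'll wake the waiter */
    /* on failure, old_state is refreshed with the current value */
} while (!pg_atomic_compare_exchange_u32(&buf->state, &old_state,
                                         new_state));

if (old_state & BM_PIN_COUNT_WAITER)
{
    /* wake the backend that is waiting for the pin count to drop */
}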

fetch_sub() itself would actually be race-free for unsetting a flag, as long
as the flag is a proper power of two.
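
A minimal sketch of both variants (again assuming a buf->state layout like
the snippet above, where BM_PIN_COUNT_WAITER is a single bit, and the
pg_atomic_fetch_sub_u32/fetch_and_u32 primitives from atomics.h; not
committed code):

/*
 * With fetch_sub this only works if the flag is known to be set, e.g.
 * because we just saw it in the value returned by the earlier fetch_sub:
 * subtracting a power of two then flips exactly that bit to zero, and no
 * borrow reaches the pin-count bits below it.
 */
pg_atomic_fetch_sub_u32(&buf->state, BM_PIN_COUNT_WAITER);

/*
 * fetch_and clears the bit whether or not it is currently set, which is
 * what the atomic-AND suggestion buys, but neither variant addresses the
 * ordering race described above.
 */
pg_atomic_fetch_and_u32(&buf->state, ~BM_PIN_COUNT_WAITER);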

> While some locks may be avoidable and some may be replaced by atomic
> operations, I believe that we will still be left with some of them.

Besides the two xlog.c ones and lwlock.c, which others are hot? I think we
pretty much removed the rest?

> Optimizing spinlocks, if we can do it in a generic fashion that does not
> hurt other platforms, will still give us something.

Sure, I'm just doubtful that's easy.

I think we should just get rid of spinlocks ASAP. The hard part imo is
removing them from lwlock.c's slow path and from the buffer headers. After
that we should be fine replacing the remaining ones with lwlocks.


