Hi,
On 2022-01-06 21:39:57 -0500, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > I wonder if this will show the full set of spinlock contention issues - isn't
> > this only causing contention for one spinlock between two processes?
>
> I don't think so -- the point of using the "pipelined" variant is
> that messages are passing between all N worker processes concurrently.
> (With the proposed test, I see N processes all pinning their CPUs;
> if I use the non-pipelined API, they are busy but nowhere near 100%.)
My understanding of the shm_mq code is that the test ends up with N shm_mq
instances, one for each worker. After all:
> * shm_mq.c
> * single-reader, single-writer shared memory message queue
These separate shm_mq instances forward messages in a circle,
"leader"->worker_1->worker_2->...->"leader". So there isn't a single contended
spinlock, but a bunch of different spinlocks, each with at most two backends
accessing it?
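
To make that concrete, here's a minimal standalone sketch of the topology as
I understand it (pthreads with POSIX spinlocks standing in for s_lock.h; this
is purely illustrative, not the test_shm_mq code itself):

/*
 * Ring of single-producer/single-consumer slots, each guarded by its
 * own spinlock.  Any given lock is only ever taken by two threads,
 * its producer and its consumer.  Build: cc -O2 -pthread ring.c
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NWORKERS 4
#define NLAPS    100000

typedef struct
{
    pthread_spinlock_t lock;
    int         value;          /* 0 = slot empty, else a message */
} Slot;

static Slot ring[NWORKERS];

static void *
worker(void *arg)
{
    int         me = (int) (intptr_t) arg;
    int         next = (me + 1) % NWORKERS;

    for (int lap = 0; lap < NLAPS; lap++)
    {
        int         msg = 0;

        /* receive from my inbound slot (shared with one producer only) */
        while (msg == 0)
        {
            pthread_spin_lock(&ring[me].lock);
            msg = ring[me].value;
            ring[me].value = 0;
            pthread_spin_unlock(&ring[me].lock);
        }

        /* forward to my outbound slot (shared with one consumer only) */
        for (;;)
        {
            int         sent = 0;

            pthread_spin_lock(&ring[next].lock);
            if (ring[next].value == 0)
            {
                ring[next].value = msg;
                sent = 1;
            }
            pthread_spin_unlock(&ring[next].lock);
            if (sent)
                break;
        }
    }
    return NULL;
}

int
main(void)
{
    pthread_t   threads[NWORKERS];

    for (int i = 0; i < NWORKERS; i++)
        pthread_spin_init(&ring[i].lock, PTHREAD_PROCESS_PRIVATE);

    ring[0].value = 1;          /* seed one message to circulate */

    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&threads[i], NULL, worker, (void *) (intptr_t) i);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(threads[i], NULL);

    puts("done");
    return 0;
}

In that scheme each lock sees at most two threads, so the cacheline
ping-pong stays pairwise rather than all-against-all.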
> It is just one spinlock, true, but I think the point is to gauge
> what happens with N processes all contending for the same lock.
> We could add some more complexity to use multiple locks, but
> does that really add anything but complexity?
Right, I agree that that's what we should test - it's just not immediately
obvious to me that we are.
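
For contrast, the access pattern I'd want the test to exercise looks more
like this sketch, where all N workers hammer the same lock (again standalone
pthreads, purely illustrative, not the proposed test):

/*
 * N threads all contending for one spinlock.
 * Build: cc -O2 -pthread contend.c
 */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 8
#define NITERS   1000000

static pthread_spinlock_t shared_lock;
static long counter;

static void *
worker(void *arg)
{
    (void) arg;

    for (long i = 0; i < NITERS; i++)
    {
        /* every one of the N threads takes the same lock */
        pthread_spin_lock(&shared_lock);
        counter++;
        pthread_spin_unlock(&shared_lock);
    }
    return NULL;
}

int
main(void)
{
    pthread_t   threads[NWORKERS];

    pthread_spin_init(&shared_lock, PTHREAD_PROCESS_PRIVATE);

    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(threads[i], NULL);

    printf("counter = %ld\n", counter);
    return 0;
}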
Greetings,
Andres Freund