Re: [COMMITTERS] pgsql: Reduce the number of semaphores used under --disable-spinlocks. - Mailing list pgsql-hackers

From Andres Freund
Subject Re: [COMMITTERS] pgsql: Reduce the number of semaphores used under --disable-spinlocks.
Date
Msg-id 20140618195649.GE3968@awork2.anarazel.de
In response to Re: [COMMITTERS] pgsql: Reduce the number of semaphores used under --disable-spinlocks.  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: [COMMITTERS] pgsql: Reduce the number of semaphores used under --disable-spinlocks.  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On 2014-06-18 15:52:49 -0400, Robert Haas wrote:
> On Wed, Jun 18, 2014 at 3:32 PM, Andres Freund <andres@2ndquadrant.com> wrote:
> > Hi,
> >
> > On 2014-01-08 23:58:16 +0000, Robert Haas wrote:
> >> Reduce the number of semaphores used under --disable-spinlocks.
> >>
> >> Instead of allocating a semaphore from the operating system for every
> >> spinlock, allocate a fixed number of semaphores (by default, 1024)
> >> from the operating system and multiplex all the spinlocks that get
> >> created onto them.  This could self-deadlock if a process attempted
> >> to acquire more than one spinlock at a time, but since processes
> >> aren't supposed to execute anything other than short stretches of
> >> straight-line code while holding a spinlock, that shouldn't happen.
> >>
> >> One motivation for this change is that, with the introduction of
> >> dynamic shared memory, it may be desirable to create spinlocks that
> >> last for less than the lifetime of the server.  Without this change,
> >> attempting to use such facilities under --disable-spinlocks would
> >> quickly exhaust any supply of available semaphores.  Quite apart
> >> from that, it's desirable to contain the quantity of semaphores
> >> needed to run the server simply on convenience grounds, since using
> >> too many may make it harder to get PostgreSQL running on a new
> >> platform, which is mostly the point of --disable-spinlocks in the
> >> first place.
> >
> > I'm looking at the way you did this in the context of the atomics
> > patch. Won't:
> > s_init_lock_sema(volatile slock_t *lock)
> > {
> >         static int      counter = 0;
> >
> >         *lock = (++counter) % NUM_SPINLOCK_SEMAPHORES;
> > }
> >
> > lead to bad results if spinlocks are initialized after startup?
> 
> Why?

Because every subsequently started process will start with a copy of the
postmaster's counter, or with 0 under EXEC_BACKEND?
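
To make that concrete, here is a minimal standalone C sketch of the same
round-robin scheme (not PostgreSQL code; the helper name and the pool size
are made up). The static counter is per-process state, so every forked
backend inherits the postmaster's value and then advances its own private
copy:

/* Standalone illustration, not PostgreSQL code: the static counter is
 * per-process state, copied at fork() and never shared afterwards. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

#define NUM_SPINLOCK_SEMAPHORES 128   /* made up, smaller than the default */

static int counter = 0;

static int
assign_semaphore_slot(void)
{
    /* same round-robin assignment as s_init_lock_sema() above */
    return (++counter) % NUM_SPINLOCK_SEMAPHORES;
}

int
main(void)
{
    /* the "postmaster" assigns a few slots during startup */
    for (int i = 0; i < 3; i++)
        assign_semaphore_slot();

    /* two "backends" forked afterwards each inherit counter == 3, so the
     * next spinlock either of them creates lands on slot 4 */
    for (int child = 0; child < 2; child++)
    {
        if (fork() == 0)
        {
            printf("backend %d: new spinlock -> semaphore slot %d\n",
                   child, assign_semaphore_slot());
            _exit(0);
        }
    }
    while (wait(NULL) > 0)
        ;
    return 0;
}

Both children print slot 4; under EXEC_BACKEND the counter even restarts
at 0, so the assignment sequence repeats from scratch in every backend.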

> > Essentially mapping new spinlocks to the same semaphore?
> 
> Yeah, but so what?  If we're mapping a bajillion spinlocks to the same
> semaphore already, what's a few more?

Well, imagine something like parallel query creating new shared memory
segments, each including a spinlock (possibly via an lwlock), at runtime.
If there were several backends processing such queries, the spinlocks
they create would all map to the same semaphore.
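
For illustration, a standalone sketch using plain POSIX semaphores rather
than the real PGSemaphore calls (the names and the pool size are made up):
two logically independent locks that were assigned the same slot end up
contending on one semaphore, so a try-lock on one fails while the other is
held.

/* Standalone sketch with plain POSIX semaphores, not the real PGSemaphore
 * API: two independent "spinlocks" assigned the same slot share one
 * semaphore, so holding one makes a trylock on the other fail. */
#include <semaphore.h>
#include <stdio.h>

#define NUM_SPINLOCK_SEMAPHORES 4      /* made-up pool size */

typedef int slock_t;                   /* stores a slot number, as in the
                                        * semaphore fallback */

static sem_t sema_pool[NUM_SPINLOCK_SEMAPHORES];

static int
tas_like(volatile slock_t *lock)
{
    /* nonzero return means "already locked", following TAS conventions */
    return sem_trywait(&sema_pool[*lock]) != 0;
}

static void
unlock_like(volatile slock_t *lock)
{
    sem_post(&sema_pool[*lock]);
}

int
main(void)
{
    for (int i = 0; i < NUM_SPINLOCK_SEMAPHORES; i++)
        sem_init(&sema_pool[i], 0, 1);

    /* Two spinlocks that would live in unrelated DSM segments, but whose
     * creating backends both computed slot 1 from their private counter
     * copies; both are driven from one process here to keep it short. */
    slock_t lock_a = 1;
    slock_t lock_b = 1;

    tas_like(&lock_a);                 /* one lock is acquired */
    printf("TAS on the other, unrelated lock: %s\n",
           tas_like(&lock_b) ? "busy (false contention)" : "free");

    unlock_like(&lock_a);
    return 0;
}

The effect is extra contention among unrelated locks, and it gets worse as
more runtime-created spinlocks pile onto the same few slots.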

Andres Freund

--
 Andres Freund                       http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


