Re: Reducing overhead of frequent table locks - Mailing list pgsql-hackers

From Jeff Janes
Subject Re: Reducing overhead of frequent table locks
Date
Msg-id BANLkTi=6kBJ0=opKEFrGM9akL5BSwP0EZA@mail.gmail.com
In response to Re: Reducing overhead of frequent table locks  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: Reducing overhead of frequent table locks  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On Fri, May 13, 2011 at 5:55 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, May 13, 2011 at 4:16 PM, Noah Misch <noah@leadboat.com> wrote:

>> I wonder if, instead, we could signal all backends at
>> marker 1 to dump the applicable parts of their local (memory) lock tables to
>> files.  Or to another shared memory region, if that didn't mean statically
>> allocating the largest possible required amount.  If we were willing to wait
>> until all backends reach a CHECK_FOR_INTERRUPTS, they could instead make the
>> global insertions directly.  That might yield a decent amount of bug swatting to
>> fill in missing CHECK_FOR_INTERRUPTS, though.
>
> I've thought about this; I believe it's unworkable.  If one backend
> goes into the tank (think: SIGSTOP, or blocking on I/O to an
> unreadable disk sector) this could lead to cascading failure.

Would that risk be substantially worse than it currently is?  If a
backend goes into the tank while holding access shared locks, it will
still block access exclusive locks until it recovers.  And those
queued access exclusive locks will block new access shared locks from
other backends.  How much is the risk magnified by the new approach,
going from "any backend holding the lock is tanked" to "any process at
all is tanked"?
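
To spell out the queuing behavior being relied on here, a toy model
(invented names, not PostgreSQL code): a request is granted only if it
conflicts neither with locks already held nor with earlier waiters, so a
queued access exclusive request blocks later access share requests.

#include <stdbool.h>
#include <stdio.h>

typedef enum { ACCESS_SHARE, ACCESS_EXCLUSIVE } LockMode;

/* AccessShare conflicts only with AccessExclusive in this two-mode model. */
static bool
conflicts(LockMode a, LockMode b)
{
    return a == ACCESS_EXCLUSIVE || b == ACCESS_EXCLUSIVE;
}

static bool
can_grant(LockMode req, const LockMode *held, int nheld,
          const LockMode *queued, int nqueued)
{
    for (int i = 0; i < nheld; i++)
        if (conflicts(req, held[i]))
            return false;
    for (int i = 0; i < nqueued; i++)       /* don't jump earlier waiters */
        if (conflicts(req, queued[i]))
            return false;
    return true;
}

int
main(void)
{
    LockMode held[]   = { ACCESS_SHARE };     /* the tanked backend's lock */
    LockMode queued[] = { ACCESS_EXCLUSIVE }; /* the waiting ALTER TABLE   */

    /* A fresh AccessShare request now queues behind the exclusive waiter. */
    printf("new AccessShare granted? %s\n",
           can_grant(ACCESS_SHARE, held, 1, queued, 1) ? "yes" : "no");
    return 0;
}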

What I'd considered playing with in the past is having
LockMethodLocalHash hang on to an Access Share lock even after
locallock->nLocks == 0, so that re-granting the lock would be a purely
local operation.  Anyone wanting an Access Exclusive lock and not
immediately getting it would have to send out a plea (via sinval?) for
other processes to release their locallock->nLocks == 0 locks.  But
this would suffer from the same problem of tanked processes.
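
As a rough sketch of that scheme (again, invented names, not PostgreSQL
code): the shared-table lock stays granted after the local count hits
zero, re-acquisition is a purely local cache hit, and the cached entries
are surrendered only when a plea arrives.

#include <stdbool.h>

#define MAX_CACHED 64

typedef struct { unsigned relid; int nLocks; bool held_in_shared; } CachedLock;

static CachedLock cache[MAX_CACHED];
static int        ncached;

/* Stand-ins for the real (expensive) shared lock table operations. */
static void shared_lock_acquire(unsigned relid) { (void) relid; }
static void shared_lock_release(unsigned relid) { (void) relid; }

/* Acquire an access share lock; cheap if still cached from a prior use. */
void
acquire_access_share(unsigned relid)
{
    for (int i = 0; i < ncached; i++)
    {
        if (cache[i].relid == relid && cache[i].held_in_shared)
        {
            cache[i].nLocks++;              /* purely local re-grant */
            return;
        }
    }
    shared_lock_acquire(relid);             /* slow path: touch shared memory */
    if (ncached < MAX_CACHED)
        cache[ncached++] = (CachedLock) { relid, 1, true };
}

/* Release: drop the local count but keep the shared lock cached. */
void
release_access_share(unsigned relid)
{
    for (int i = 0; i < ncached; i++)
    {
        if (cache[i].relid == relid && cache[i].nLocks > 0)
        {
            cache[i].nLocks--;
            return;
        }
    }
}

/* On a plea (e.g. via sinval) from a would-be access exclusive locker:
 * give back every cached lock whose local count has fallen to zero. */
void
handle_release_plea(void)
{
    for (int i = 0; i < ncached; i++)
    {
        if (cache[i].held_in_shared && cache[i].nLocks == 0)
        {
            shared_lock_release(cache[i].relid);
            cache[i].held_in_shared = false;
        }
    }
}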

Cheers,

Jeff

