Re: Reducing overhead of frequent table locks - Mailing list pgsql-hackers

From Simon Riggs
Subject Re: Reducing overhead of frequent table locks
Date
Msg-id BANLkTinhhL8e3hhrQUiL4H=7XHMY++oErg@mail.gmail.com
In response to Re: Reducing overhead of frequent table locks  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: Reducing overhead of frequent table locks  (Robert Haas <robertmhaas@gmail.com>)
Re: Reducing overhead of frequent table locks  (Bruce Momjian <bruce@momjian.us>)
List pgsql-hackers
On Tue, May 24, 2011 at 6:37 PM, Robert Haas <robertmhaas@gmail.com> wrote:

>> That being said, it's a slight extra cost for all fast-path lockers to benefit
>> the strong lockers, so I'm not prepared to guess whether it will pay off.
>
> Yeah.  Basically this entire idea is about trying to make life easier
> for weak lockers at the expense of making it more difficult for strong
> lockers.  I think that's a good trade-off in general, but we might
> need to wait until we have an actual implementation to judge whether
> we've turned the dial too far.

I like this overall concept, and I like the way it has been described
in terms of strong and weak locks. It seems very useful to me, since
temp tables can be skipped. That leaves shared DDL, where we have
already done a lot to reduce the lock levels held and are looking at
further reductions as well. I think even quite extensive delays are
worth the trade-off.

I'd been looking at this also, though I hadn't mentioned it previously
because I found an Oracle patent that discusses dynamically turning
locking on and off, so that's something to be aware of. IMHO if we
discuss this in terms of sharing/not sharing locking information, that
is sufficient to avoid the patent. That patent also describes making
the locking state change wait longer than is actually required.

I got a bit lost with the description of a potential solution. It
seemed like you were unaware that there are both a local lock table
and a shared lock table, but maybe that's just me?

The design seemed relatively easy from there: put the local lock table
in shared memory for all procs. We then have a use_strong_lock boolean
at proc level and at transaction level. Anybody that wants a strong
lock first sets use_strong_lock at proc and transaction level, then
copies all local lock data into the shared lock table, double-checking
TransactionIdIsInProgress() each time, and then queues for the lock
using the now fully set up shared lock table.
When the transaction holding the strong lock completes, we do not
attempt to reset the boolean at proc level, only at transaction level,
since DDL often occurs in groups and we want to avoid flip-flopping
quickly between lock-sharing states. Cleanup happens regularly via the
bgwriter, perhaps every 10 seconds or so. All locks are still visible
in pg_locks.

Hopefully that's of use.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

