Re: FlexLocks - Mailing list pgsql-hackers

From Pavan Deolasee
Subject Re: FlexLocks
Date
Msg-id CABOikdPd-YmicyV8kZw25xksQZ4oJy7tS89yxHbadoh=Awe8tQ@mail.gmail.com
In response to FlexLocks  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: FlexLocks
List pgsql-hackers
On Tue, Nov 15, 2011 at 7:20 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> The lower layer I called "FlexLocks",
> and it's designed to allow a variety of locking implementations to be
> built on top of it and reuse as much of the basic infrastructure as I
> could figure out how to make reusable without hurting performance too
> much.  LWLocks become the anchor client of the FlexLock system; in
> essence, most of flexlock.c is code that was removed from lwlock.c.
> The second patch, procarraylock.c, uses that infrastructure to define
> a new type of FlexLock specifically for ProcArrayLock.  It basically
> works like a regular LWLock, except that it has a special operation to
> optimize ProcArrayEndTransaction().  In the uncontended case, instead
> of acquiring and releasing the lock, it just grabs the lock, observes
> that there is no contention, clears the critical PGPROC fields (which
> isn't noticeably slower than updating the state of the lock would be)
> and releases the spin lock.

(Robert, we already discussed this a bit privately, so apologies for
duplicating this here)

Another idea is to have some sort of shared work queue mechanism, which
might turn out to be more manageable and extensible. What I am
thinking about is having a {Request, Response} kind of structure per
backend in shared memory. An obvious place to hold them is in PGPROC
for every backend. We then have a new API like LWLockExecute(lock,
mode, ReqRes). The caller first initializes the ReqRes structure with
the work it needs to get done and then calls LWLockExecute() with it.
IOW, the code flow would look like this:

<Initialize the Req/Res structure with request type and input data>
LWLockExecute(lock, mode, ReqRes)
<Consume Response and proceed further>
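
To make the shape of this a bit more concrete, here is a minimal sketch
in C of what the per-backend slot and the proposed entry point could look
like. LWLockReqRes, LWLockRequestType, LWREQ_PROCARRAY_END_XACT and the
payload layout are all invented names for illustration; only LWLockId,
LWLockMode and TransactionId are existing types:

/*
 * Illustrative sketch only -- none of these names exist in the tree.
 * One such slot would live in each backend's PGPROC, so both the request
 * and the response are reachable from any process via shared memory.
 */
#include "storage/lwlock.h"             /* LWLockId, LWLockMode */

typedef enum LWLockRequestType
{
    LWREQ_NONE = 0,
    LWREQ_PROCARRAY_END_XACT            /* e.g. clear xid/xmin under ProcArrayLock */
} LWLockRequestType;

typedef struct LWLockReqRes
{
    LWLockRequestType reqtype;          /* what the caller wants done */
    bool        done;                   /* true once the work has been performed */
    union
    {
        struct                          /* LWREQ_PROCARRAY_END_XACT */
        {
            TransactionId latestXid;    /* in: XID being completed */
        }           procarray_end_xact;
    }           payload;                /* request in, response out; kept inline
                                         * because backend-local pointers are not
                                         * usable from other processes */
} LWLockReqRes;

/*
 * Proposed API: perform the work described by 'reqres' under 'lock',
 * either directly (uncontended case) or by handing it off to the current
 * lock holder and sleeping until it is done.
 */
extern void LWLockExecute(LWLockId lock, LWLockMode mode, LWLockReqRes *reqres);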

If the lock is available in the desired mode, LWLockExecute() will
internally finish the work and return immediately. If the lock is
contended, the process would sleep. When the current holder of the lock
finishes its work and calls LWLockRelease() to release the lock, it
would not only find the processes to wake up, but would also go
through their pending work items and complete them before waking them
up. Each waiter's response area will be populated with the result.
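
Sketched in pseudo-C, the release path could look roughly like this. The
names LWLockReleaseAndExecute, lwReqRes and ExecuteRequest() are again
purely illustrative; only the wait-queue fields (mutex, head, lwWaitLink)
and the spinlock calls correspond to what lwlock.c has today. The work is
assumed to be trivial, since it runs with the lock's spinlock held:

/*
 * Illustrative only: how the release path might drain pending work
 * before waking the waiters.
 */
static void
LWLockReleaseAndExecute(LWLock *lock)
{
    PGPROC     *proc;

    SpinLockAcquire(&lock->mutex);              /* protects the wait queue */

    for (proc = lock->head; proc != NULL; proc = proc->lwWaitLink)
    {
        LWLockReqRes *rr = &proc->lwReqRes;     /* waiter's slot in its PGPROC */

        if (rr->reqtype != LWREQ_NONE && !rr->done)
        {
            ExecuteRequest(lock, rr);           /* do the trivial shared-memory update */
            rr->done = true;                    /* response area is now valid */
        }
    }

    /* ... then release the lock and wake the waiters, as lwlock.c does today ... */

    SpinLockRelease(&lock->mutex);
}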

I think this general mechanism will be useful for many users of
LWLock, especially those that do very trivial updates/reads of the
shared area but still need synchronization. One example that Robert
has already found to help a lot is ProcArrayEndTransaction(). Also, even
though both shared and exclusive waiters can use this mechanism, it
may make more sense for exclusive waiters because of the
exclusivity. For the sake of simplicity, we can enforce the semantics
that when LWLockExecute() returns, the work is guaranteed to have been
done, either by the calling backend itself or by some other backend.
That will keep the code simpler for users of this new API.
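
For instance, under those semantics the ProcArrayEndTransaction() caller
might end up looking roughly like this; the request type, payload layout
and the lwReqRes field in PGPROC are the invented names from the sketch
above, not anything in the current patch:

/*
 * Illustrative caller: hand the "clear my xid/xmin" work to
 * LWLockExecute() instead of taking ProcArrayLock ourselves.
 */
void
ProcArrayEndTransaction(PGPROC *proc, TransactionId latestXid)
{
    LWLockReqRes *rr = &proc->lwReqRes;

    rr->reqtype = LWREQ_PROCARRAY_END_XACT;
    rr->payload.procarray_end_xact.latestXid = latestXid;
    rr->done = false;

    /* Do it ourselves if uncontended, otherwise the lock holder does it. */
    LWLockExecute(ProcArrayLock, LW_EXCLUSIVE, rr);

    /* By the agreed semantics, the work is complete once we get here. */
    Assert(rr->done);
}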

Thanks,
Pavan
--
Pavan Deolasee
EnterpriseDB     http://www.enterprisedb.com

