Re: Reducing overhead of frequent table locks - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: Reducing overhead of frequent table locks
Msg-id: BANLkTinD2=Ak2x7_U7eQL=8Ne=THAFJ44g@mail.gmail.com
In response to: Re: Reducing overhead of frequent table locks (Noah Misch <noah@leadboat.com>)
Responses: Re: Reducing overhead of frequent table locks (Tom Lane <tgl@sss.pgh.pa.us>)
           Re: Reducing overhead of frequent table locks (Noah Misch <noah@leadboat.com>)
List: pgsql-hackers
On Tue, May 24, 2011 at 10:03 AM, Noah Misch <noah@leadboat.com> wrote:
> On Tue, May 24, 2011 at 08:53:11AM -0400, Robert Haas wrote:
>> On Tue, May 24, 2011 at 5:07 AM, Noah Misch <noah@leadboat.com> wrote:
>> > This drops the part about only transferring fast-path entries once when a
>> > strong_lock_counts cell transitions from zero to one.
>>
>> Right: that's because I don't think that's what we want to do.  I
>> don't think we want to transfer all per-backend locks to the shared
>> hash table as soon as anyone attempts to acquire a strong lock;
>> instead, I think we want to transfer only those fast-path locks which
>> have the same locktag as the strong lock someone is attempting to
>> acquire.  If we do that, then it doesn't matter whether the
>> strong_lock_counts[] cell is transitioning from 0 to 1 or from 6 to 7:
>> we still have to check for strong locks with that particular locktag.
>
> Oh, I see.  I was envisioning that you'd transfer all locks associated with
> the strong_lock_counts cell; that is, all the locks that would now go directly
> to the global lock table when requested going forward.  Transferring only
> exact matches seems fine too, and then I agree with your other conclusions.
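
To make that concrete, here is a rough, self-contained C sketch of
strong-lock acquisition under this scheme. Every name below is a
hypothetical stand-in (the real patch works against PostgreSQL's
LOCKTAG, the per-backend fast-path slots, and the shared lock manager
hash), and the locking needed to safely inspect another backend's
fast-path slots is elided:

#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-ins for PostgreSQL's real structures. */
typedef struct LockTag { unsigned dbOid; unsigned relOid; } LockTag;
typedef struct FastPathSlot { bool used; LockTag tag; } FastPathSlot;

#define FP_PARTITIONS 1024          /* size of strong_lock_counts[] */
#define MAX_BACKENDS 64
#define FP_SLOTS_PER_BACKEND 16

static int strong_lock_counts[FP_PARTITIONS];
static FastPathSlot fastpath[MAX_BACKENDS][FP_SLOTS_PER_BACKEND];

/* Toy hash; the real code would hash the whole locktag. */
static unsigned
fast_hash(const LockTag *tag)
{
    return tag->dbOid ^ tag->relOid;
}

/* Stubs for the parts elided here. */
static void
transfer_to_shared_table(int backend, const LockTag *tag)
{
    (void) backend; (void) tag;   /* insert entry into shared lock hash */
}

static void
acquire_via_shared_table(const LockTag *tag)
{
    (void) tag;                   /* normal lock acquisition path */
}

static void
acquire_strong_lock(const LockTag *tag)
{
    int part = fast_hash(tag) % FP_PARTITIONS;

    /* Tell would-be fast-path lockers in this partition to use the
       shared table; whether this increment is atomic is the question
       raised below. */
    ++strong_lock_counts[part];

    /* Transfer only fast-path entries whose locktag exactly matches
       the strong lock being acquired -- not everything that happens
       to hash to this partition. */
    for (int b = 0; b < MAX_BACKENDS; b++)
        for (int s = 0; s < FP_SLOTS_PER_BACKEND; s++)
            if (fastpath[b][s].used &&
                memcmp(&fastpath[b][s].tag, tag, sizeof(LockTag)) == 0)
            {
                transfer_to_shared_table(b, &fastpath[b][s].tag);
                fastpath[b][s].used = false;
            }

    acquire_via_shared_table(tag);
}

The point of transferring only exact matches is that a single
strong_lock_counts[] cell covers many unrelated locktags, so the
transfer work stays proportional to the locks that could actually
conflict.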

I took a crack at implementing this and ran into difficulties.
I haven't actually gotten as far as testing whether it works, but
I'm worried about a possible problem with the algorithm.

When a strong lock is taken or released, we have to increment or
decrement strong_lock_counts[fasthashpartition].  Here's the question:
is that atomic?  In other words, suppose that strong_lock_counts[42]
starts out at 0, and two backends both do ++strong_lock_counts[42].
Are we guaranteed to end up with "2" in that memory location or might
we unluckily end up with "1"?  I think the latter is possible... and
some guard is needed to make sure that doesn't happen.
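
To illustrate the hazard and one possible guard, here is a minimal,
generic C11 sketch; it is not the patch code, and a portable fix in
PostgreSQL of that era would more likely wrap the counter in a
spinlock, since C11 atomics were not available to it:

/* Plain "++strong_lock_counts[42]" typically compiles to a separate
   load, increment, and store.  Two backends can interleave:

       A: load 0               B: load 0
       A: store 1              B: store 1     => final value 1, not 2

   One guard (illustrative, using C11 <stdatomic.h>) is to make the
   read-modify-write itself atomic: */

#include <stdatomic.h>

#define FP_PARTITIONS 1024                 /* hypothetical */
static atomic_int strong_lock_counts[FP_PARTITIONS];

static void
strong_lock_count_inc(int fasthashpartition)
{
    /* Atomic fetch-and-add: concurrent increments cannot be lost. */
    atomic_fetch_add(&strong_lock_counts[fasthashpartition], 1);
}

static void
strong_lock_count_dec(int fasthashpartition)
{
    atomic_fetch_sub(&strong_lock_counts[fasthashpartition], 1);
}

Either way, the essential property is that the increment and decrement
are indivisible read-modify-write operations.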

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

