reducing the overhead of frequent table locks, v4

From: Robert Haas
I didn't get a lot of comments on the previous version of my patch
to accelerate table locks.

http://archives.postgresql.org/pgsql-hackers/2011-06/msg00953.php

Here's a new version anyway.  In this version, I have:

1. Made pg_locks show fast-path locks as well as regular locks, with a
new Boolean column to distinguish.
2. Renamed fpLWLock to backendLock, on the belief that we're going to
want something like this for quite a bit more than just this one
patch.
3. Removed some debugging code.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment

Re: reducing the overhead of frequent table locks, v4

From: Thom Brown
On 27 June 2011 15:13, Robert Haas <robertmhaas@gmail.com> wrote:
> I didn't get a lot of comments on the previous version of my patch
> to accelerate table locks.
>
> http://archives.postgresql.org/pgsql-hackers/2011-06/msg00953.php
>
> Here's a new version anyway.  In this version, I have:
>
> 1. Made pg_locks show fast-path locks as well as regular locks, with a
> new Boolean column to distinguish.
> 2. Renamed fpLWLock to backendLock, on the belief that we're going to
> want something like this for quite a bit more than just this one
> patch.
> 3. Removed some debugging code.

Are you able to benchmark this again, and possibly get Stefan to give
it a go too?

Thom


Re: reducing the overhead of frequent table locks, v4

From: Robert Haas
On Thu, Jun 30, 2011 at 6:28 PM, Thom Brown <thom@linux.com> wrote:
> On 27 June 2011 15:13, Robert Haas <robertmhaas@gmail.com> wrote:
>> I didn't get a lot of comments on the previous version of my patch
>> to accelerate table locks.
>>
>> http://archives.postgresql.org/pgsql-hackers/2011-06/msg00953.php
>>
>> Here's a new version anyway.  In this version, I have:
>>
>> 1. Made pg_locks show fast-path locks as well as regular locks, with a
>> new Boolean column to distinguish.
>> 2. Renamed fpLWLock to backendLock, on the belief that we're going to
>> want something like this for quite a bit more than just this one
>> patch.
>> 3. Removed some debugging code.
>
> Are you able to benchmark this again, and possibly get Stefan to give
> it a go too?

I probably can, but there's not going to be any significant
difference.  The patch hasn't changed much.  Note that the version
where Stefan saw a big performance regression when he pounded the
server was actually a combination of this patch and the fast vxid
locks patch.  If we only apply this one, I believe there will still be
enough lock manager contention to stifle the worst of those effects.

Of course, even if we apply both, there's only going to be a
regression on servers with >32 cores, I think, and even then only when
the number of clients is greater than the number of cores.  Everyone
else will see a benefit.  I think that the next thing for me to tackle
is the SInvalReadLock spinlock contention, but what I'm really hurting
for is some code review.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: reducing the overhead of frequent table locks, v4

From: Jeff Davis
On Thu, 2011-06-30 at 19:25 -0400, Robert Haas wrote:
> what I'm really hurting
> for is some code review.

I'm trying to get my head into this patch. I have a couple questions:

Does this happen to be based on some academic research? I don't
necessarily expect it to be; just thought I'd ask.

Here is my high-level understanding of the approach, please correct me
where I'm mistaken:

Right now, concurrent activity on the same object, even with weak locks,
causes contention because everything has to hit the same global lock
partition. Because we expect an actual conflict to be rare, this patch
kind of turns the burden upside down such that:
 (a) those taking weak locks need only acquire a lock on their own lock
in their own PGPROC, which means that it doesn't contend with anyone
else taking out a weak lock; and
 (b) taking out a strong lock requires a lot more work, because it needs
to look at every backend in the proc array to see if it has conflicting
locks.

Of course, those things both have some complexity, because the
operations need to be properly synchronized. You force a valid schedule
by using the memory synchronization guarantees provided by taking those
per-backend locks rather than a centralized lock, thus still avoiding
lock contention in the common (weak locks only) case.
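To make the division of labor concrete, here is a rough single-file sketch of the scheme as summarized above. All identifiers here are invented for illustration; the real patch works with PGPROC entries, per-partition strong-lock counters, and migrates fast-path entries into the shared lock table rather than simply clearing them.

```c
/* Illustrative sketch only -- not the actual PostgreSQL code. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

#define MAX_BACKENDS 8
#define FP_SLOTS 16

typedef struct
{
    pthread_mutex_t backendLock;    /* weak lockers touch only their own lock */
    unsigned int    fpRelId[FP_SLOTS];
    bool            fpUsed[FP_SLOTS];
} BackendSketch;

static BackendSketch backends[MAX_BACKENDS];
static atomic_int strong_lock_count;    /* nonzero: fast path is closed */

static void sketch_init(void)
{
    for (int b = 0; b < MAX_BACKENDS; b++)
        pthread_mutex_init(&backends[b].backendLock, NULL);
}

/* Weak lock: record the relation in our own slots, under our own lock only */
static bool fast_path_lock(int me, unsigned int relid)
{
    if (atomic_load(&strong_lock_count) > 0)
        return false;               /* strong lock pending: use the slow path */
    pthread_mutex_lock(&backends[me].backendLock);
    for (int i = 0; i < FP_SLOTS; i++)
    {
        if (!backends[me].fpUsed[i])
        {
            backends[me].fpUsed[i] = true;
            backends[me].fpRelId[i] = relid;
            pthread_mutex_unlock(&backends[me].backendLock);
            return true;
        }
    }
    pthread_mutex_unlock(&backends[me].backendLock);
    return false;                   /* table full: fall back to the main lock manager */
}

/* Strong lock: advertise first, then sweep every backend's fast-path slots */
static void strong_lock_sweep(unsigned int relid)
{
    atomic_fetch_add(&strong_lock_count, 1);    /* close the fast path */
    for (int b = 0; b < MAX_BACKENDS; b++)
    {
        pthread_mutex_lock(&backends[b].backendLock);
        for (int i = 0; i < FP_SLOTS; i++)
        {
            if (backends[b].fpUsed[i] && backends[b].fpRelId[i] == relid)
                backends[b].fpUsed[i] = false;  /* would migrate to the shared table */
        }
        pthread_mutex_unlock(&backends[b].backendLock);
    }
}
```

The memory-ordering point is that the strong locker increments the shared counter before scanning, and each weak locker checks it before (and while holding) its own backendLock, so the per-backend lock acquisitions provide the synchronization that the central lock used to.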

Regards,
Jeff Davis



Re: reducing the overhead of frequent table locks, v4

From: Robert Haas
On Wed, Jul 6, 2011 at 2:02 PM, Jeff Davis <pgsql@j-davis.com> wrote:
> On Thu, 2011-06-30 at 19:25 -0400, Robert Haas wrote:
>> what I'm really hurting
>> for is some code review.
>
> I'm trying to get my head into this patch. I have a couple questions:
>
> Does this happen to be based on some academic research? I don't
> necessarily expect it to be; just thought I'd ask.

I did spend some time Googling around for papers and some of the other
approaches I've tried (mostly unsuccessfully, thus far) were based on
those papers, but I don't remember running across anything that
resembled the approach taken in the patch.  I think that the patch
basically came out of what I know about how PostgreSQL works: namely, we
take tons of locks, but mostly of a sort that doesn't conflict with each
other.

> Here is my high-level understanding of the approach, please correct me
> where I'm mistaken:
>
> Right now, concurrent activity on the same object, even with weak locks,
> causes contention because everything has to hit the same global lock
> partition. Because we expect an actual conflict to be rare, this patch
> kind of turns the burden upside down such that:
>  (a) those taking weak locks need only acquire a lock on their own lock
> in their own PGPROC, which means that it doesn't contend with anyone
> else taking out a weak lock; and
>  (b) taking out a strong lock requires a lot more work, because it needs
> to look at every backend in the proc array to see if it has conflicting
> locks.
>
> Of course, those things both have some complexity, because the
> operations need to be properly synchronized. You force a valid schedule
> by using the memory synchronization guarantees provided by taking those
> per-backend locks rather than a centralized lock, thus still avoiding
> lock contention in the common (weak locks only) case.

You got it.  That's a very good summary.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: reducing the overhead of frequent table locks, v4

From: Jeff Davis
On Mon, 2011-06-27 at 10:13 -0400, Robert Haas wrote:
> I didn't get a lot of comments on the previous version of my patch
> to accelerate table locks.
> 
> http://archives.postgresql.org/pgsql-hackers/2011-06/msg00953.php
> 
> Here's a new version anyway.  In this version, I have:

I am trying to figure out holdsStrongLockCount. It's declared as an
integer, but (unless cscope is failing me) is only ever set to 0 or 1.
It's never incremented or decremented. It looks like it's supposed to be
a boolean indicating that the lock should decrement something in
FastPathStrongLocks when released.

Furthermore, in AtPrepare_Locks(), the comment says:

/*
 * Arrange not to release any strong lock count held by this lock
 * entry.  We must retain the count until the prepared transaction
 * is committed or rolled back.
 */
locallock->holdsStrongLockCount = 0;

But doesn't seem to "arrange" much, as far as I can tell.

I think the 2PC code is still correct, because it infers from the
lockmode that the FastPathStrongLocks counter needs to be incremented.
However, why doesn't other code (RemoveLocalLock is the only reader)
make a similar inference?

Can we just get rid of holdsStrongLockCount completely?

Regards,
Jeff Davis



Re: reducing the overhead of frequent table locks, v4

From: Jeff Davis
A few very minor things that I noticed:

1. You use pre-increment in "for" loops (e.g. FastPathGrantLock). The
rest of the code seems to use post-increment in "for" loops, so you
might as well stick to the convention in cases where the two have
identical meaning.

2. Typo in the README: "acquire the every" should be "acquire every".

3. Typo in comment in lock.c: "last because most" should be "last
because it's the most". (I just realized that's actually not your typo,
so fixing it is optional).

Regards,
Jeff Davis



Re: reducing the overhead of frequent table locks, v4

From: Robert Haas
On Jul 10, 2011, at 4:15 PM, Jeff Davis <pgsql@j-davis.com> wrote:
> On Mon, 2011-06-27 at 10:13 -0400, Robert Haas wrote:
>> I didn't get a lot of comments on the previous version of my patch
>> to accelerate table locks.
>>
>> http://archives.postgresql.org/pgsql-hackers/2011-06/msg00953.php
>>
>> Here's a new version anyway.  In this version, I have:
>
> I am trying to figure out holdsStrongLockCount. It's declared as an
> integer, but (unless cscope is failing me) is only ever set to 0 or 1.
> It's never incremented or decremented. It looks like it's supposed to be
> a boolean indicating that the lock should decrement something in
> FastPathStrongLocks when released.

Yes.

> Furthermore, in AtPrepare_Locks(), the comment says:
>
> /*
> * Arrange not to release any strong lock count held by this lock
> * entry.  We must retain the count until the prepared transaction
> * is committed or rolled back.
> */
> locallock->holdsStrongLockCount = 0;
>
> But doesn't seem to "arrange" much, as far as I can tell.

Well, it arranges not to release the strong lock count when the LOCALLOCK is nuked; instead, we need to release it when
the final COMMIT PREPARED or ROLLBACK PREPARED happens.

> I think the 2PC code is still correct, because it infers from the
> lockmode that the FastPathStrongLocks counter needs to be incremented.
> However, why doesn't other code (RemoveLocalLock is the only reader)
> make a similar inference?

Because you could hit an ERROR in LockAcquire, and you need to know, at the time you clean up the LOCALLOCK, whether or
not a strong lock count had been acquired.  I didn't originally have holdsStrongLockCount, but it seemed really fragile
that way.
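A stripped-down illustration of that bookkeeping (invented names; the real code keeps the counter per lock-table partition and transfers the count to the prepared transaction's state):

```c
/* Illustrative sketch only -- names do not match the actual patch. */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int fast_path_strong_locks;   /* shared strong-lock counter */

typedef struct
{
    bool holdsStrongLockCount;  /* did this entry bump the shared counter? */
} LocalLockSketch;

/* Before acquiring a strong lock: advertise it, and remember that we did,
 * so that cleanup after an ERROR in LockAcquire releases exactly once. */
static void begin_strong_acquire(LocalLockSketch *ll)
{
    atomic_fetch_add(&fast_path_strong_locks, 1);
    ll->holdsStrongLockCount = true;
}

/* Runs on normal release and on error cleanup alike */
static void remove_local_lock(LocalLockSketch *ll)
{
    if (ll->holdsStrongLockCount)
    {
        atomic_fetch_sub(&fast_path_strong_locks, 1);
        ll->holdsStrongLockCount = false;
    }
}

/* At PREPARE TRANSACTION: clear the flag so remove_local_lock won't release
 * the count; COMMIT PREPARED / ROLLBACK PREPARED releases it later instead. */
static void at_prepare(LocalLockSketch *ll)
{
    ll->holdsStrongLockCount = false;
}
```

This is why clearing the flag in AtPrepare_Locks "arranges" anything at all: it is the only thing stopping the generic cleanup path from decrementing the counter too early.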

...Robert

Re: reducing the overhead of frequent table locks, v4

From: Florian Weimer
* Jeff Davis:

> Does this happen to be based on some academic research? I don't
> necessarily expect it to be; just thought I'd ask.

Paul E. McKenney's thesis contains a few references.  It's called
"asymmetrical reader-writer locking" there, and Ingo Molnar implemented
this as "brlock" in Linux 2.4.  The earliest citation seems to be
W. C. Hsieh and W. E. Weihl, "Scalable reader-writer locks for parallel
systems", MIT-LCS-TR-521, published in 1991.
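For reference, the asymmetric scheme those sources describe can be sketched in a few lines (illustrative names; Linux's brlock used per-CPU rather than per-thread locks, and production versions add fairness and preemption handling):

```c
/* Minimal "big-reader" lock sketch: readers are cheap, writers expensive. */
#include <pthread.h>

#define NREADERS 4

static pthread_mutex_t br_slot[NREADERS];   /* one lock per reader slot */

static void br_init(void)
{
    for (int i = 0; i < NREADERS; i++)
        pthread_mutex_init(&br_slot[i], NULL);
}

/* Reader: touch only your own slot -- no contention with other readers */
static void br_read_lock(int me)   { pthread_mutex_lock(&br_slot[me]); }
static void br_read_unlock(int me) { pthread_mutex_unlock(&br_slot[me]); }

/* Writer: acquire every slot, always in index order to avoid deadlock */
static void br_write_lock(void)
{
    for (int i = 0; i < NREADERS; i++)
        pthread_mutex_lock(&br_slot[i]);
}

static void br_write_unlock(void)
{
    for (int i = NREADERS - 1; i >= 0; i--)
        pthread_mutex_unlock(&br_slot[i]);
}
```

The analogy to the patch: weak table locks play the reader role (per-backend state only), while strong locks play the writer role (visit every backend).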

--
Florian Weimer                <fweimer@bfk.de>
BFK edv-consulting GmbH       http://www.bfk.de/
Kriegsstraße 100              tel: +49-721-96201-1
D-76133 Karlsruhe             fax: +49-721-96201-99


Re: reducing the overhead of frequent table locks, v4

From: Jeff Davis
/* ... It's also possible that
 * we're acquiring a second or third lock type on a relation we have
 * already locked using the fast-path, but for now we don't worry about
 * that case either.
 */

How common is that case? There are only 16 entries in the fast path lock
table, so it seems like it would frequently fill up. So, if there are
common code paths that acquire different weak locks on the same
relation, then we might commonly miss a fast-path opportunity.

One path that acquires multiple weak locks is an INSERT INTO foo
SELECT ... FROM foo ...

Is that common enough to worry about?

Regards,
Jeff Davis



Re: reducing the overhead of frequent table locks, v4

From: Robert Haas
On Jul 11, 2011, at 11:45 AM, Jeff Davis <pgsql@j-davis.com> wrote:
> * ... It's also possible that
> * we're acquiring a second or third lock type on a relation we have
> * already locked using the fast-path, but for now we don't worry about
> * that case either.
> */
>
> How common is that case? There are only 16 entries in the fast path lock
> table, so it seems like it would frequently fill up. So, if there are
> common code paths that acquire different weak locks on the same
> relation, then we might commonly miss a fast-path opportunity.

Yeah, that might be worth some more thought.

I haven't been that worried about overflow of the fast path table. If you are locking more than 16 relations at once,
you probably have at least 5 tables in the query, maybe more - it depends on how many indexes you have, of course.  My
assumption has been that at that point you're going to spend enough time planning and executing the query that the lock
manager will no longer be a major bottleneck.  Of course, there might be cases where that isn't so.

The trade-off here is that if we don't skip the fast path when we think the table's full, we slow down lock
acquisitions 17 through infinity.  I was reluctant to do that. I've been operating on the theory that the fast path
should exist not because it's in general better (and thus we must be certain to use it whenever possible) but because it
relieves unbearable pressure in specific problematic cases (and thus outside of those cases we just need it to stay out
of the way).  But it's possible that this is an overly simplistic mental model and not the best trade-off in practice.

...Robert




Re: reducing the overhead of frequent table locks, v4

From: Jeff Davis
On Tue, 2011-07-12 at 07:55 -0500, Robert Haas wrote:
> I haven't been that worried about overflow of the fast path table. If
> you are locking more than 16 relations at once, you probably have at
> least 5 tables in the query, maybe more - it depends on how many
> indexes you have, of course.  My assumption has been that at that
> point you're going to spend enough time planning and executing the
> query that the lock manager will no longer be a major bottleneck.  Of
> course, there might be cases where that isn't so.

Yeah, I think you're right here. It's probably not much of a practical
concern.

I was slightly bothered because it seemed a little unpredictable. But it
seems very minor, and if we wanted to fix it later I think we could.

Regards,
Jeff Davis



Re: reducing the overhead of frequent table locks, v4

From: Robert Haas
On Jul 12, 2011, at 12:02 PM, Jeff Davis <pgsql@j-davis.com> wrote:
> Yeah, I think you're right here. It's probably not much of a practical
> concern.
>
> I was slightly bothered because it seemed a little unpredictable. But it
> seems very minor, and if we wanted to fix it later I think we could.

Yes, I agree. I think there are a number of things we could possibly fine-tune, but it's not clear to me just yet which
ones are really problems or what the right solutions are.  I think once the basic patch is in and people start beating
on it we'll get a better feeling for which parts can benefit from further engineering.

...Robert

Re: reducing the overhead of frequent table locks, v4

From: Jeff Davis
On Tue, 2011-07-12 at 13:32 -0500, Robert Haas wrote:
> On Jul 12, 2011, at 12:02 PM, Jeff Davis <pgsql@j-davis.com> wrote:
> > Yeah, I think you're right here. It's probably not much of a practical
> > concern.
> > 
> > I was slightly bothered because it seemed a little unpredictable. But it
> > seems very minor, and if we wanted to fix it later I think we could.
> 
> Yes, I agree. I think there are a number of things we could possibly fine-tune, but it's not clear to me just yet
> which ones are really problems or what the right solutions are.  I think once the basic patch is in and people start
> beating on it we'll get a better feeling for which parts can benefit from further engineering.

OK, marking "ready for committer" assuming that you will take care of my
previous complaints (the biggest one is that holdsStrongLockCount should
be boolean).

Disclaimer: I have done no performance review at all, even though this
is a performance patch!

I like the patch and I like the approach. It seems like the potential
benefits are worth the extra complexity, which seems manageable and
mostly isolated to lock.c.

Regards,
Jeff Davis



Re: reducing the overhead of frequent table locks, v4

From: Robert Haas
On Tue, Jul 12, 2011 at 3:24 PM, Jeff Davis <pgsql@j-davis.com> wrote:
> On Tue, 2011-07-12 at 13:32 -0500, Robert Haas wrote:
>> On Jul 12, 2011, at 12:02 PM, Jeff Davis <pgsql@j-davis.com> wrote:
>> > Yeah, I think you're right here. It's probably not much of a practical
>> > concern.
>> >
>> > I was slightly bothered because it seemed a little unpredictable. But it
>> > seems very minor, and if we wanted to fix it later I think we could.
>>
>> Yes, I agree. I think there are a number of things we could possibly fine-tune, but it's not clear to me just yet
>> which ones are really problems or what the right solutions are.  I think once the basic patch is in and people start
>> beating on it we'll get a better feeling for which parts can benefit from further engineering.
>
> OK, marking "ready for committer" assuming that you will take care of my
> previous complaints (the biggest one is that holdsStrongLockCount should
> be boolean).
>
> Disclaimer: I have done no performance review at all, even though this
> is a performance patch!
>
> I like the patch and I like the approach. It seems like the potential
> benefits are worth the extra complexity, which seems manageable and
> mostly isolated to lock.c.

Thanks.  Committed, with minor changes based on your comments.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company