Re: group locking: incomplete patch, just for discussion - Mailing list pgsql-hackers

From: Jeff Davis
Subject: Re: group locking: incomplete patch, just for discussion
Date:
Msg-id: 1416303635.2998.193.camel@jeff-desktop
In response to: Re: group locking: incomplete patch, just for discussion  (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: group locking: incomplete patch, just for discussion  (Amit Kapila <amit.kapila16@gmail.com>)
           Re: group locking: incomplete patch, just for discussion  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On Mon, 2014-11-17 at 14:32 -0500, Robert Haas wrote:
> I think I see your point, which it just so happens Amit articulated to
> me in different words while we were chatting about this problem this
> morning.  We want to avoid waiving the mutual exclusion provided by
> the lock manager only to end up re-implementing something very much
> like the current lock manager to arbitrate resource contention among
> backends within a parallel group.

Even worse, it seems like you lose composability of executor nodes.
Right now, you can rearrange executor nodes in the planner quite easily,
because they all take tuples as input and give tuples as output (with
few exceptions).
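
To make that concrete, here is a toy model of that uniform interface (all
names invented for illustration; the real executor dispatches through
ExecProcNode() over a PlanState tree): every node exposes the same
next-tuple operation, so the planner can stack nodes freely.

    #include <stdio.h>

    /* Toy model of the executor's pull-based interface: every node
     * implements the same "give me the next tuple" operation, so any
     * node can sit on top of any other.  Tuples are just ints here. */
    typedef struct Node Node;
    struct Node
    {
        int  (*next) (Node *self, int *tuple_out);  /* 0 = end of stream */
        Node  *child;                               /* input node, if any */
        int    state;                               /* per-node scratch */
    };

    /* Leaf node: "scans" the integers 1..5. */
    static int
    scan_next(Node *self, int *tuple_out)
    {
        if (self->state >= 5)
            return 0;
        *tuple_out = ++self->state;
        return 1;
    }

    /* Inner node: keeps only even tuples pulled from its child. */
    static int
    filter_next(Node *self, int *tuple_out)
    {
        int t;

        while (self->child->next(self->child, &t))
        {
            if (t % 2 == 0)
            {
                *tuple_out = t;
                return 1;
            }
        }
        return 0;
    }

    int
    main(void)
    {
        Node scan   = { scan_next, NULL, 0 };
        Node filter = { filter_next, &scan, 0 };
        int  t;

        /* Tuples in, tuples out: the stacking order is the planner's
         * choice, not the nodes'. */
        while (filter.next(&filter, &t))
            printf("%d\n", t);       /* prints 2 and 4 */
        return 0;
    }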

What is the input to a parallelized executor node? What is the output?
How do you hook one parallelized node to another in the planner? Does it
stay parallel all the way through, or do you have to serialize between
nodes?

You can answer that for Scan, because it's a leaf node; but how will we
get to HashJoin, etc.?

>   However, we also don't want the
> mutual exclusion that the lock manager provides to serve as a barrier
> to implementing useful parallelism; that is, if the parallel backends
> want to manage conflicts themselves instead of letting the lock
> manager do it, that should be possible.

Agreed. Backends are quite generic, and they should be usable for all
kinds of things.

> To reiterate the basic problem here, if we do nothing at all about the
> lock manager, a parallel backend can stall trying to grab an
> AccessExclusiveLock that the user backend already holds, either because
> the user backend holds an AccessExclusiveLock as well, or because some
> other process is waiting for one; either way, we'll deadlock and not
> notice.

My feeling is that we should keep the concept of a group and group
leader from your patch, and improve the deadlock detector to make use of
that information (looking at the code, it looks doable but not trivial).
But unless I am missing something, we should separate out the lock
sharing, and defer that until later.
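
To sketch what group-awareness could buy the detector (a toy model with
invented data, not code from your patch): suppose the leader is waiting
for its worker to produce tuples, which never shows up in the lock
manager. A per-process walk of the waits-for graph dead-ends at the
apparently idle leader, while collapsing every member onto its group
leader exposes the cycle.

    #include <stdio.h>

    #define NBACKENDS 4

    /* Invented scenario.  Lock-manager waits only:
     *   backend 2 wants a lock held by the leader (0);
     *   worker  3 wants a lock held by backend 2;
     *   the leader itself is waiting for worker 3 to produce tuples,
     *   which the lock manager never sees. */
    static int lock_waits_for[NBACKENDS] = { -1, -1, 0, 2 };
    static int leader[NBACKENDS]         = {  0,  1, 2, 0 }; /* 3 is in 0's group */

    /* Per-process view: follow lock waits from 'start'; the chain
     * always dead-ends at the idle-looking leader, so no cycle. */
    static int
    process_cycle(int start)
    {
        int seen[NBACKENDS] = { 0 };
        int cur = start;

        while (cur != -1)
        {
            if (seen[cur])
                return 1;
            seen[cur] = 1;
            cur = lock_waits_for[cur];
        }
        return 0;
    }

    /* Group-aware view: collapse every backend onto its leader, so a
     * lock wait by any member becomes an edge out of the whole group. */
    static int
    group_cycle(int start)
    {
        int seen[NBACKENDS] = { 0 };
        int cur = start;

        while (cur != -1)
        {
            int g = leader[cur];
            int next = -1;
            int i;

            if (seen[g])
                return 1;            /* came back to a group on the path */
            seen[g] = 1;

            /* any waiting member of group g continues the chain
             * (a real detector would follow every such edge) */
            for (i = 0; i < NBACKENDS; i++)
                if (leader[i] == g && lock_waits_for[i] != -1)
                    next = lock_waits_for[i];
            cur = next;
        }
        return 0;
    }

    int
    main(void)
    {
        printf("per-process detector sees a deadlock: %d\n", process_cycle(2)); /* 0 */
        printf("group-aware detector sees a deadlock: %d\n", group_cycle(2));   /* 1 */
        return 0;
    }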

I am actually starting to see that something like lock sharing could be
useful. I think your example of VACUUM (in the other thread) is a good
one. But I don't see anything forcing us to decide now if we can instead
detect the deadlocks and abort. We will have a lot more information
later, and I think we'll make a better decision about the exact form it
takes.

In other words: lock groups are important, but I don't see the rush for
lock sharing specifically.

Regards,
Jeff Davis