On Mon, Jul 23, 2018 at 5:14 AM, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
> I think the case I am talking about is somewhere between these two -
> the backend which has imprecisely decided that there's a deadlock will
> start taking exclusive locks and will have to wait for all the
> backends with shared locks to release their locks. This will never
> happen if there is a stream of backends which keeps on taking shared
> locks. This is the classical problem of starvation because of lock
> upgrade. Taking an exclusive lock to begin with avoids this problem.
I don't quite see how this could happen in this case. IIUC, if a
backend takes the shared locks and finds that there is a deadlock, it
will then take exclusive locks. If it takes the shared locks and
finds that there is no deadlock, then it will just wait to be woken
up. After some small fraction of a second, every backend in the
system that was taking shared locks will have taken them and either
started waiting for an exclusive lock or on the process semaphore.
It's not like ProcArrayLock or CLogControlLock where there can be an
endless stream of new requests.
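To spell out the sequence I have in mind, here is a rough sketch. The
partition-lock machinery (NUM_LOCK_PARTITIONS,
LockHashPartitionLockByIndex, LWLockAcquire/LWLockRelease) is the real
stuff from lock.h and lwlock.h, but the read-only check and the overall
control flow are only my shorthand for what I understand the patch to
be doing, not the patch itself:

    int     i;
    bool    deadlock_suspected;

    /* First pass: take every lock partition in shared mode. */
    for (i = 0; i < NUM_LOCK_PARTITIONS; i++)
        LWLockAcquire(LockHashPartitionLockByIndex(i), LW_SHARED);

    /*
     * Search for a deadlock without changing anything.  (Hypothetical
     * helper: the real DeadLockCheck() can rearrange wait queues, so
     * some read-only variant or flag is assumed here.)
     */
    deadlock_suspected = DeadLockCheckReadOnly(MyProc);

    /* Release in reverse order, per the comment quoted below. */
    for (i = NUM_LOCK_PARTITIONS; --i >= 0;)
        LWLockRelease(LockHashPartitionLockByIndex(i));

    if (deadlock_suspected)
    {
        /* Retake everything exclusively and redo the full check. */
        for (i = 0; i < NUM_LOCK_PARTITIONS; i++)
            LWLockAcquire(LockHashPartitionLockByIndex(i), LW_EXCLUSIVE);
        /* ... run the real deadlock check, then release as above ... */
    }
    /* otherwise, go back to sleeping on the process semaphore */

So after the initial flurry, each backend makes exactly one pass over
the shared locks and is then parked either on the exclusive acquire or
on its semaphore; there is no continuing stream of new shared
acquisitions.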
>>> /*
>>> * And release locks. We do this in reverse order for two reasons: (1)
>>> * Anyone else who needs more than one of the locks will be trying to lock
>>> * them in increasing order; we don't want to release the other process
>>> * until it can get all the locks it needs.
>
> Your patch is breaking this assumption. We may end up in a situation
> where more than two backends have shared locks. One of the backends
> starts releasing its locks and releases even the first lock. Some
> other backend gets its first exclusive lock because of that but
> cannot get the next one since it's held by the second backend holding
> shared locks. This might not happen if by "more than one" the comment
> means all the locks.
I don't see the problem. An exclusive acquirer can't get the first
lock until every shared holder has released it, and since shared
holders release in decreasing order, the first lock is the last one
they give up; by that point they hold nothing else. Unless I'm
confused?
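To make that ordering argument concrete, the acquire and release loops
(following the shape of the existing code the quoted comment sits in
front of; "mode" here stands in for LW_SHARED or LW_EXCLUSIVE) look
like this:

    /* everyone acquires in increasing partition order */
    for (i = 0; i < NUM_LOCK_PARTITIONS; i++)
        LWLockAcquire(LockHashPartitionLockByIndex(i), mode);

    /* ... and releases in decreasing order */
    for (i = NUM_LOCK_PARTITIONS; --i >= 0;)
        LWLockRelease(LockHashPartitionLockByIndex(i));

An exclusive acquirer blocks on partition 0 before touching anything
else, and a shared holder gives up partition 0 last, so by the time the
exclusive acquirer gets past partition 0, that shared holder has
already released all of its other locks.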
I think this patch is probably a good idea in some form. Whether it's
got all the details exactly right I'm not sure.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company