Re: Preventing deadlock on parallel backup - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: Preventing deadlock on parallel backup
Msg-id: 23820.1473362134@sss.pgh.pa.us
In response to: Preventing deadlock on parallel backup (Lucas <lucas75@gmail.com>)
Responses: Re: Preventing deadlock on parallel backup (Lucas <lucas75@gmail.com>)
List: pgsql-hackers

Lucas <lucas75@gmail.com> writes:
> I made a small modification in pg_dump to prevent parallel backup failures
> due to exclusive lock requests made by other tasks.

> The modification I made takes shared locks for each parallel backup worker
> at the very beginning of the job. That way, any other job that attempts to
> acquire exclusive locks will wait for the backup to finish.
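
For concreteness, as I understand it, the patch has each worker session
run something like this right after it connects (the table names here
are hypothetical; the real list would come from the dump's table list):

    BEGIN;
    -- take a shared lock on every table in the dump up front,
    -- rather than locking each table only when the worker reaches it
    LOCK TABLE public.t1 IN ACCESS SHARE MODE;
    LOCK TABLE public.t2 IN ACCESS SHARE MODE;
    -- ... one LOCK TABLE per table in the dump ...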

I do not think this would eliminate the problem; all it's doing is making
the window for trouble a bit narrower.  Also, it implies taking out many
locks that would never be used, since no worker process will be touching
all of the tables.
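
To spell out the hazard with a concrete interleaving (t is a
hypothetical table; note that the worker actually requests its lock
with NOWAIT and fails, rather than hanging forever):

    -- Session A (pg_dump leader), at the start of the dump:
    BEGIN;
    LOCK TABLE t IN ACCESS SHARE MODE;      -- granted

    -- Session B (some other job), before the worker has locked t:
    LOCK TABLE t IN ACCESS EXCLUSIVE MODE;  -- queues behind A

    -- Session C (pg_dump worker), when it needs t:
    LOCK TABLE t IN ACCESS SHARE MODE;      -- queues behind B; but A
    -- won't commit until C finishes with t, so nothing can proceed

Pre-locking in the workers merely shortens the interval in which B can
slip in between A's locks and C's; it doesn't close it.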

I think a real solution involves teaching the backend to allow a worker
process to acquire a lock as long as its master already has the same lock.
There's already queue-jumping logic of that sort in the lock manager, but
it doesn't fire because we don't see that there's a potential deadlock.
What needs to be worked out, mostly, is how we can do that without
creating security hazards (since the backend would have to accept a
command enabling this behavior from the client).  Maybe it's good enough
to insist that leader and follower be the same user ID, or maybe not.
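
In terms of the interleaving above, what I'm describing would make the
worker's request succeed at once.  There is no SQL-level way to express
the leader/follower grouping today; the grouping noted in the comments
is exactly the part that needs backend support:

    -- Session A (leader):
    BEGIN;
    LOCK TABLE t IN ACCESS SHARE MODE;      -- granted

    -- Session B (other job):
    LOCK TABLE t IN ACCESS EXCLUSIVE MODE;  -- queues behind A

    -- Session C (worker, known to the backend as A's follower):
    LOCK TABLE t IN ACCESS SHARE MODE;      -- granted immediately,
    -- jumping ahead of B's pending exclusive request because A
    -- already holds the same lock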

There are some related problems in parallel query, for which AFAIK we just
have an ugly kluge solution ATM.  It'd be better if there were a clear
model of when to allow a parallel worker to get a lock out-of-turn.
        regards, tom lane



pgsql-hackers by date:

Previous
From: Adam Brightwell
Date:
Subject: COPY command with RLS bug
Next
From: Peter Geoghegan
Date:
Subject: Re: Is tuplesort_heap_siftup() a misnomer?