[GENERAL] Advisory lock deadlock issue - Mailing list pgsql-general

From David Rosenstrauch
Subject [GENERAL] Advisory lock deadlock issue
Msg-id 60c8d34d-b788-6177-e3f3-5ceb52cc0a07@darose.net
Responses Re: [GENERAL] Advisory lock deadlock issue  (Merlin Moncure <mmoncure@gmail.com>)
List pgsql-general
I'm running a Spark job that writes to a Postgres db (v9.6) using the
JDBC driver (v42.0.0), and I'm running into a puzzling error:

2017-06-06 16:05:17.718 UTC [36661] dmx@dmx ERROR:  deadlock detected
2017-06-06 16:05:17.718 UTC [36661] dmx@dmx DETAIL:  Process 36661 waits for ExclusiveLock on advisory lock [16649,0,102440,5]; blocked by process 36662.
        Process 36662 waits for ExclusiveLock on advisory lock [16649,0,102439,5]; blocked by process 36661.

However, I can't for the life of me figure out a) how the advisory locks
are getting created (I'm not requesting them myself), and b) how to
fix this issue.
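
In case it helps anyone reproduce what I'm seeing: a minimal way to watch
for these locks is to query pg_locks, which reports advisory locks with
locktype = 'advisory'. A sketch (the connection details are placeholders
for my real ones):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class AdvisoryLockPeek {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/dmx", "dmx", "secret");
                 Statement st = conn.createStatement();
                 // pg_locks reports advisory locks with locktype = 'advisory';
                 // granted = false rows are the waiters
                 ResultSet rs = st.executeQuery(
                     "SELECT pid, classid, objid, objsubid, mode, granted " +
                     "FROM pg_locks WHERE locktype = 'advisory'")) {
                while (rs.next()) {
                    System.out.printf("pid=%d key=[%d,%d,%d] mode=%s granted=%b%n",
                        rs.getInt("pid"), rs.getLong("classid"),
                        rs.getLong("objid"), rs.getInt("objsubid"),
                        rs.getString("mode"), rs.getBoolean("granted"));
                }
            }
        }
    }

The ungranted rows match the pids named in the DETAIL lines above.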


A bit of background:  My Spark job runs as multiple processes on
multiple machines, and each process performs its writes to pgsql using
the JDBC driver.  The writes are performed a) as PG UPSERTs, b) as JDBC
batches, and c) using JDBC prepared statements.  So when it's time to
write to the db, each process creates several prepared statements, adds
a bunch of UPSERTs to each one (i.e., each prepared statement contains
a batch of a few hundred UPSERTs), and then calls executeBatch() on each
statement to perform the write.  That executeBatch() call is where I'm
running into the error.
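
For reference, the write path looks roughly like the sketch below. The
table and column names are simplified stand-ins for my real schema, but
the shape (prepared UPSERT + addBatch() + executeBatch()) is the same:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchedUpsertSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/dmx", "dmx", "secret")) {
                // PG 9.5+ UPSERT: INSERT ... ON CONFLICT ... DO UPDATE
                String sql =
                    "INSERT INTO user_stats (user_id, score) VALUES (?, ?) " +
                    "ON CONFLICT (user_id) DO UPDATE SET score = EXCLUDED.score";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (int i = 0; i < 500; i++) {  // a batch of a few hundred UPSERTs
                        ps.setLong(1, i % 100L);     // stand-in user ID
                        ps.setInt(2, i);             // stand-in value
                        ps.addBatch();
                    }
                    ps.executeBatch();  // <-- the call that hits the deadlock
                }
            }
        }
    }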

In theory, since there are multiple processes issuing these batched DB
writes, there could be a record locking problem if, say, 2 processes
tried to update the same user record.  But in reality this should be
impossible:  Spark partitions everything by a key - in my case userID -
so all DB writes for the same user should happen in the same process.
At worst a single batch might contain multiple UPSERTs to the same user
record, but I should never see updates to the same user from different
processes.
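
In Spark terms, the job does something like the following (heavily
simplified - the class name and data are made up, and the real work
inside foreachPartition is the batched UPSERT write described above):

    import java.util.Arrays;
    import org.apache.spark.HashPartitioner;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class PartitionedWriteSketch {
        public static void main(String[] args) {
            JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("partitioned-write"));
            JavaPairRDD<Long, Integer> byUser = sc
                .parallelize(Arrays.asList(1L, 2L, 3L, 1L, 2L))  // stand-in user IDs
                .mapToPair(uid -> new Tuple2<>(uid, 1))
                // All records for a given userID hash to the same partition,
                // hence the same writer process.
                .partitionBy(new HashPartitioner(4));
            byUser.foreachPartition(it -> {
                // One JDBC connection per partition; batch the UPSERTs as above.
                while (it.hasNext()) {
                    Tuple2<Long, Integer> rec = it.next();
                    // ps.setLong(1, rec._1()); ps.addBatch(); ...
                }
            });
            sc.stop();
        }
    }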


So, I'm very puzzled by that deadlock error.  Specifically:

* How could it be possible that there are 2 PG processes contending over
the same locks?  Spark's partitioning should ensure that all updates to
the same user record get routed to the same process, so this situation
shouldn't even be possible.

* How/why am I winding up acquiring advisory locks in the first place?
I'm never requesting them.  I looked at the PG JDBC driver code a bit,
thinking it might be creating them automatically for some reason, but
that doesn't seem to be the case.  Maybe the PG database itself is?
(E.g., does PG automatically use advisory locks with UPSERTs?)  For
contrast, what explicit acquisition looks like is sketched after this
list.

And, last but not least:

* How do I resolve this "waits for ExclusiveLock on advisory lock"
issue?  There's precious little info available about exactly what that
error message means and how to solve it.  (The one generic idea I have
is sketched at the end of this list.)
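
For the second question above: the only way I know of to take an
advisory lock deliberately is via the pg_advisory_* functions, which my
code never calls. For anyone unfamiliar, explicit acquisition would look
something like this (the key values are arbitrary examples, and the
connection details are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class AdvisoryLockExample {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/dmx", "dmx", "secret")) {
                conn.setAutoCommit(false);
                // Transaction-scoped advisory lock on an arbitrary two-int key;
                // it is released automatically at commit/rollback.
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT pg_advisory_xact_lock(?, ?)")) {
                    ps.setInt(1, 42);
                    ps.setInt(2, 7);
                    ps.execute();
                }
                // ... work that should be serialized across processes ...
                conn.commit();
            }
        }
    }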
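And on the last question: the only generic mitigation I can think of is
the classic one for lock-ordering deadlocks - make every writer acquire
its locks in a consistent order, e.g. by sorting each batch by the
conflict key before addBatch().  I don't know whether that helps with
advisory locks taken behind my back, but a sketch (with a made-up row
type) would be:

    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.Comparator;
    import java.util.List;

    public class OrderedBatchSketch {
        // Hypothetical row type (Java 16+ record).
        record Row(long userId, int score) {}

        // Adding rows in a consistent key order means two writers that touch
        // overlapping keys lock them in the same order, turning a potential
        // deadlock into a plain wait.  Requires a mutable list.
        static void writeSorted(PreparedStatement ps, List<Row> rows)
                throws SQLException {
            rows.sort(Comparator.comparingLong(Row::userId));
            for (Row r : rows) {
                ps.setLong(1, r.userId());
                ps.setInt(2, r.score());
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }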


Any help or pointers greatly appreciated!

Thanks,

DR

