Re: Locks analysis after-the-fact - Mailing list pgsql-general

From Merlin Moncure
Subject Re: Locks analysis after-the-fact
Date
Msg-id CAHyXU0wvz-FzHXEyCpU6xRV4O=KBwmvx6ZTCd3oQm15e9smnNA@mail.gmail.com
In response to Locks analysis after-the-fact  (Olivier Macchioni <olivier.macchioni@wingo.ch>)
List pgsql-general
On Fri, Apr 27, 2018 at 2:55 AM, Olivier Macchioni
<olivier.macchioni@wingo.ch> wrote:
> Hello all,
>
> PostgreSQL version 9.4.17
>
> We have a number of queries running on the same DB from many systems. Among other things, we need to INSERT / UPDATE
> on a table based on external events - this is done via a pool of 5 SQL connections which process an average of 500
> events/second. The table is used by other systems as well...
>
> We have more and more locks on this table, which prevents it from being updated as it should - and I'm trying to see
> how to improve things without setting up an external queue system just to manage those events.
>
> I've enabled "log_lock_waits = on" in the configuration, which generates the following kind of log entries:
>
> 2018-04-27 07:48:07 CEST [1419.269] "[unknown]" xx@yy LOG:  process 1419 still waiting for ExclusiveLock on advisory lock [16417,1653339453,2672182422,2] after 1000.166 ms
> 2018-04-27 07:48:07 CEST [1419.270] "[unknown]" xx@yy DETAIL:  Process holding the lock: 16418. Wait queue: 1419.
> 2018-04-27 07:48:07 CEST [1419.271] "[unknown]" xx@yy STATEMENT:  SELECT pg_advisory_xact_lock(1653339453, -1622784874);
> 2018-04-27 07:48:07 CEST [1419.272] "[unknown]" xx@yy LOG:  process 1419 acquired ExclusiveLock on advisory lock [16417,1653339453,2672182422,2] after 1318.721 ms
> 2018-04-27 07:48:07 CEST [1419.273] "[unknown]" xx@yy STATEMENT:  SELECT pg_advisory_xact_lock(1653339453, -1622784874);
>
> I can easily identify the session owner of the different processes, but the process holding the lock belongs to an
> ERP which generates a lot of different kinds of queries - I'd like to identify precisely which one is causing this
> long-lasting lock.
>
> It doesn't look like this is possible via the PostgreSQL logging features, and doing the usual lock troubleshooting
> "on the spot" using pg_locks is not practical due to the short duration and odd timing of those locks.
>
> Does anyone have an idea of how to proceed in such a case?

Here, the locking query is obvious: "SELECT pg_advisory_xact_lock".
If this is an ERP application, I bet advisory locks are being used to
simulate pessimistic locking in some way.  Unfortunately this is no
help on its own, since the real operation is elsewhere in that
transaction.  Echoing the advice above, you need to enable full
statement logging and make sure your log_line_prefix contains the pid
in some easily searched-for way, such as [%p].  Once you have that,
you can trace back through the log file by the locking pid and get to
the bottom of the situation pretty easily.
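
Something along these lines in postgresql.conf would do it (just a
sketch - the exact prefix format is your call, %p is the part that
matters; log_statement = 'all' logs every statement from every
session):

    log_line_prefix = '%m [%p] %u@%d '   # %p = backend pid
    log_statement = 'all'                 # log every statement
    log_lock_waits = on                   # you already have this

With that in place you can grep the log for the holding pid (16418 in
your DETAIL line) and see everything that backend ran in the
transaction that took the advisory lock.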

Be mindful of statement logging: it can consume disk space really
quickly and add a lot of overhead to the database.
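
If cluster-wide statement logging is too expensive, one way to narrow
it (a sketch, assuming xx/yy in your log lines are the role and
database you care about) is to scope the setting:

    -- log all statements only for the affected database (new sessions)
    ALTER DATABASE yy SET log_statement = 'all';
    -- or only for the ERP's role:
    -- ALTER ROLE xx SET log_statement = 'all';
    -- and turn it back off later:
    -- ALTER DATABASE yy RESET log_statement;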

merlin

