Re: Postgres Query Plan Live Lock - Mailing list pgsql-performance

From Jeff Janes
Subject Re: Postgres Query Plan Live Lock
Date
Msg-id CAMkU=1ww0JTod6-UYQY2jAYFbA4tdPwYebD8FeUeLrv_kaokOg@mail.gmail.com
In response to Re: Postgres Query Plan Live Lock  ("Pweaver (Paul Weaver)" <pweaver@panjiva.com>)
List pgsql-performance
On Wed, Feb 5, 2014 at 11:47 AM, Pweaver (Paul Weaver) <pweaver@panjiva.com> wrote:

On Wed, Feb 5, 2014 at 9:52 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
On Monday, February 3, 2014, Pweaver (Paul Weaver) <pweaver@panjiva.com> wrote:
We have been running into a (live lock?) issue on our production Postgres instance causing queries referencing a particular table to become extremely slow and our application to lock up.

This tends to occur on a particular table that gets a lot of queries against it after a large number of deletes. When this happens, the following symptoms occur when queries referencing that table are run (even if we stop the deleting):

What do you mean by "stop the deleting"?  Are you pausing the delete without either committing or rolling back the transaction, just holding it open?  Or are you stopping it cleanly, between transactions?

We are repeatedly running delete commands in their own transactions. We stop issuing new deletes and let them finish cleanly. 
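
For illustration only (the table name, predicate, and batch size here are invented), a batched delete run with autocommit gives each chunk its own short-lived transaction, which matches the pattern described above:

    -- each DELETE statement commits on its own, so no transaction stays open long
    DELETE FROM big_table
    WHERE ctid IN (SELECT ctid
                   FROM big_table
                   WHERE created_at < now() - interval '90 days'
                   LIMIT 10000);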

Also, how many queries are happening concurrently?  Perhaps you need a connection pooler.
Usually between 1 and 20. When it gets locked up, closer to 100-200.
We should add a connection pooler. Would the number of active queries on the table be causing the issue?
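
As a quick check during the next event, something along these lines (the column names assume 9.2 or later; on 9.1 the relevant column is current_query rather than state) shows how many backends are actually running a query versus sitting idle:

    -- count backends by state to see how many are truly active
    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY count(*) DESC;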

100 to 200 active connections cannot be helpful.  That number should not be *inherently* harmful, but it certainly can be very harmful in conjunction with something else.  One thing it could be harmful in conjunction with is contention on the PROCLOCK spinlock, but if you don't have open transactions that have touched a lot of tuples (which it sounds like you do not), then that probably isn't the case.  Another could be kernel scheduler problems.  I think some of the early 3-series kernels had problems with the scheduler under many concurrently active processes, which led to high %system CPU time.  There are also problems with NUMA, and with transparent huge pages, from around the same kernel versions.
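
If it helps, a rough check for transactions that have been open a long time (the 5-minute cutoff is arbitrary, and on 9.1 the pid/state/query columns go by different names) would be:

    -- list sessions whose current transaction has been open for a while
    SELECT pid, xact_start, state, query
    FROM pg_stat_activity
    WHERE xact_start < now() - interval '5 minutes'
    ORDER BY xact_start;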

 

Is the CPU time user time or system time?  What kernel version do you have?
Real time. The kernel is 3.2.0-26.

I meant: using "top" or "sar" during a lock-up, is the CPU time being spent in %user or in %system?

Unfortunately I don't know exactly when in the 3-series kernels the problems showed up, or were fixed. 

In any case, lowering the max_connections will probably prevent you from accidentally poking the beast, even if we can't figure out exactly what kind of beast it is.
 


 max_connections              | 600                                      | configuration file

That is quite extreme.  If a temporary load spike (like from the deletes and the hinting needed after them) slows down the select queries and you start more and more of them, soon you could tip the system over into kernel scheduler insanity with high system time.  Once in this mode, it will stay there until the incoming stream of queries stops and the existing ones clear out.  But, if that is what is occurring, I don't know why queries on other tables would still be fast.
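
One way to take the hint-bit and dead-tuple work out of the foreground (a sketch only, the table name is a placeholder) is to vacuum the table explicitly once a round of deletes has finished, before the select traffic has to do that work itself:

    -- sets hint bits and removes dead row versions left behind by the deletes
    VACUUM (ANALYZE, VERBOSE) big_table;
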
We probably want a connection pooler anyway, but in this particular case, the load average is fairly low on the machine running Postgres.

Is the load average low even during the problem event?  

Cheers,

Jeff
