On 3/14/19 00:29, Stephan Schmidt wrote:
> I'm currently working on a high-performance database and want to make
> sure that whenever there are slow queries during regular operations I've
> got all the information about the query in my logs. auto_explain comes to
> mind, but the documentation explicitly states that it comes at a cost.
> My question is: how big is the latency added by auto_explain, in
> percent or ms?
One thought - what if the problem query is a 4 ms query that just went to
6 ms, but it's executed millions of times per second? Each call only takes
1.5x as long (6 ms / 4 ms), so the total load that one statement puts on
the system grows by 50%, even though no individual execution looks slow
enough to stand out.
The approach I've had the most success with is to combine active session
sampling (from pg_stat_activity) with pg_stat_statements (ideally with
periodic snapshots) to identify the problematic SQL statements, then run
EXPLAIN ANALYZE on those specific queries.
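As a rough illustration of what I mean (pg_stat_statements has to be in
shared_preload_libraries, and its timing columns are mean_exec_time /
total_exec_time on PG 13+, mean_time / total_time on older releases):

    -- crude active session sample: what is running right now and what it's waiting on
    SELECT now() AS sampled_at, pid, state, wait_event_type, wait_event,
           now() - query_start AS running_for, query
    FROM pg_stat_activity
    WHERE state = 'active'
      AND pid <> pg_backend_pid();

    -- statements with the highest average runtime (PG 13+ column names)
    SELECT calls, mean_exec_time, total_exec_time, rows, query
    FROM pg_stat_statements
    ORDER BY mean_exec_time DESC
    LIMIT 20;

Run the first query from a loop or a scheduler every few seconds; the
waits and statements that keep showing up in the samples are usually the
ones worth an EXPLAIN ANALYZE.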
There are a handful of extensions on the internet that can do active
session sampling for you, and I've seen a few scripts that can be put
into a scheduler to capture snapshots of stats tables.
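The snapshot part can be as simple as copying pg_stat_statements into a
history table on a schedule; a minimal sketch, with an illustrative table
name and cron interval:

    -- one-time setup: an empty history table with a capture timestamp
    CREATE TABLE IF NOT EXISTS statements_history AS
    SELECT now() AS captured_at, * FROM pg_stat_statements WHERE false;

    -- then from cron, e.g. every 10 minutes:
    --   */10 * * * * psql -d mydb -c "INSERT INTO statements_history SELECT now(), * FROM pg_stat_statements;"

Diffing consecutive captures (deltas in calls and total time) shows which
statements actually consumed each interval.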
Maybe something to consider in addition to the auto_explain stuff.
-Jeremy
--
http://about.me/jeremy_schneider