On Thu, Mar 14, 2019 at 3:29 AM Stephan Schmidt <schmidt@dltmail.de> wrote:
Hello,
I'm currently working on a high-performance database and want to make sure that whenever there are slow queries during regular operations, I've got all the information about the query in my logs. auto_explain comes to mind, but the documentation explicitly states that it comes at a cost. My question is: how much latency does auto_explain add, in percent or in ms?
You will have to measure it yourself and see. It depends on your hardware, your OS and OS version, and your PostgreSQL version, as well as the nature of your queries. If you have auto_explain.log_timing=on, I find that large sorts are the worst impacted, so if you run a lot of those, you should be careful.
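For reference, a typical auto_explain setup looks like the following; the threshold and flag values here are illustrative examples, not recommendations:

```
# postgresql.conf -- illustrative auto_explain settings (values are examples)
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '250ms'   # log plans only for queries slower than this
auto_explain.log_analyze = on             # include actual row counts and run times
auto_explain.log_timing = on              # per-node timings; the costly part on slow clocks
auto_explain.log_buffers = on             # include buffer usage in the logged plan
```

Setting auto_explain.log_timing=off while keeping log_analyze=on still logs actual row counts, just without per-node timings, which is the cheaper middle ground mentioned below.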
On older kernels, I would run with auto_explain.log_timing=off. On newer kernels, where the clock can be read from user space, I run with auto_explain.log_timing=on. The slowdown is noticeable with careful investigation (around 3%, the last time I measured it), but usually well worth paying to have actual data to work with when I find slow queries in the log. I made a special role with auto_explain disabled for use with a few reporting queries with large sorts, both to circumvent the overhead and to avoid spamming the log with slow queries I already know about.
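The per-role opt-out can be done with a role-level setting; the role name here is just an example:

```
-- Disable auto_explain for a dedicated reporting role
-- (a log_min_duration of -1 disables plan logging entirely)
CREATE ROLE reporting LOGIN;
ALTER ROLE reporting SET auto_explain.log_min_duration = -1;
```

Queries run by that role then skip auto_explain's instrumentation, while every other session keeps the cluster-wide settings.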