Re: maximum for auto_explain.log_min_duration doesn't seem to make sense - Mailing list pgsql-bugs

From David G. Johnston
Subject Re: maximum for auto_explain.log_min_duration doesn't seem to make sense
Date
Msg-id CAKFQuwaPyFHN=cmEhOYdd2R5daZP2Wn0og9M8BEPE5k_tuQYjA@mail.gmail.com
In response to maximum for auto_explain.log_min_duration doesn't seem to make sense  (Kevin Bloch <kev@codingthat.com>)
Responses Re: maximum for auto_explain.log_min_duration doesn't seem to make sense  (Kevin Bloch <kev@codingthat.com>)
Re: maximum for auto_explain.log_min_duration doesn't seem to make sense  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-bugs
On Fri, Feb 23, 2018 at 11:34 AM, Kevin Bloch <kev@codingthat.com> wrote:
According to https://dba.stackexchange.com/a/198429/28774 , this setting maxes out at INT_MAX / 1000, but since it's never multiplied by 1000 or any other number, it seems it should perhaps just be INT_MAX 

I suspect that the counter to which that value is being compared also wants to be an int, and that if one checks for "val > INT_MAX" then val itself cannot be restricted to an integer (and since we are capturing time, we need some unknown amount of buffer).

As for the question in the post: "What can I do if I want to log even longer-running queries on a data warehouse?"

The answer is "nothing special; anything running longer than the supplied value will be logged".  What you cannot do is choose not to log a subset of queries that take longer than INT_MAX/1,000 ms and less than infinity - once you hit INT_MAX/1,000 ms you must log it.
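For illustration, a minimal sketch of pushing the threshold to its ceiling, assuming a superuser session (this parameter is superuser-only, and LOADing the library ad hoc also needs superuser; it can equally be set in postgresql.conf with session_preload_libraries). The value 2147483 is simply INT_MAX/1,000 expressed in milliseconds, roughly 35.8 minutes:

    -- Load the module ad hoc and raise the threshold to its documented maximum.
    LOAD 'auto_explain';
    -- INT_MAX/1000 = 2147483 ms, about 35.8 minutes: the largest accepted value.
    SET auto_explain.log_min_duration = 2147483;
    -- Any statement running at least this long has its plan logged;
    -- there is no setting for an upper cutoff beyond which logging stops again.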

David J.
