On 08/20/2015 06:09 PM, Tom Lane wrote:
> Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
>> I wonder if we should have a default of capping the dump to say 1k lines
>> or such and only optionally do a full one.
>
> -1. It's worked like this for the last fifteen years or thereabouts,
> and you're the first one to complain. I suspect some weirdness in
> your logging setup, rather than any systemic problem that we
> need to lobotomize our diagnostic output in order to prevent.
Not sure what you consider weird about the logging setup here - the context
dump is IMHO already borderline internal diagnostic output at debug level
(rather than something that makes sense to an average sysadmin), and there
is no way to control it. But having the backend dump 2 million basically
identical lines into the general logfile per event, as in our case, seems
excessive and rather abusive towards the rest of the system (from an I/O
perspective, for example, or from the perspective of log file
post-processing tools).
>
> (The reason I say "lobotomize" is that there's no particularly good
> reason to assume that the first N lines will tell you what you need
> to know. And the filter rule would have to be *very* stupid, because
> we can't risk trying to allocate any additional memory to track what
> we're doing here.)
I do understand that there might be challenges there, but in the last 15
years machines have gotten way faster and pg has become way more capable,
and some of those capabilities might need to be revisited in that regard.
And while it is very nice that pg survives multiple OOM cases pretty
gracefully, I don't think it is entitled to put an IMHO unreasonable burden
on the rest of the system by writing "insane" amounts of data.
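Just to illustrate what I mean: even a "very stupid" filter would not need
any extra memory at OOM time, only a counter threaded through the recursive
walk of the context tree. Rough, untested sketch against a toy context tree
(ToyContext, dump_context_stats and MAX_STATS_LINES are made-up names for
illustration, this is not the actual MemoryContextStatsInternal code):

    #include <stdio.h>

    #define MAX_STATS_LINES 1000    /* hypothetical cap */

    typedef struct ToyContext
    {
        const char *name;
        struct ToyContext *firstchild;
        struct ToyContext *nextchild;
    } ToyContext;

    /* Returns the number of lines emitted so far; stops once the cap is hit. */
    static int
    dump_context_stats(const ToyContext *ctx, int level, int lines_so_far)
    {
        const ToyContext *child;

        if (lines_so_far >= MAX_STATS_LINES)
            return lines_so_far;    /* suppress the rest of the dump */

        /* one line per context, indented by depth, no allocation involved */
        fprintf(stderr, "%*s%s: ...stats...\n", level * 2, "", ctx->name);
        lines_so_far++;

        for (child = ctx->firstchild; child != NULL; child = child->nextchild)
            lines_so_far = dump_context_stats(child, level + 1, lines_so_far);

        return lines_so_far;
    }

    int
    main(void)
    {
        ToyContext grandchild = {"ExecutorState", NULL, NULL};
        ToyContext child = {"PortalMemory", &grandchild, NULL};
        ToyContext top = {"TopMemoryContext", &child, NULL};
        int n = dump_context_stats(&top, 0, 0);

        if (n >= MAX_STATS_LINES)
            fprintf(stderr, "(... further context stats suppressed ...)\n");
        return 0;
    }

Whether the first N lines are the interesting ones is of course a separate
question, but the cap itself costs nothing but a stack-passed int.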
Just from a sysadmin perspective it also means it is trivial for a
misbehaving app to fill up the logfile on a system, because unlike almost
all other logging output there seems to be no way to control/disable it on
a per-role/per-database level.
Stefan