Re: csvlog gets crazy when csv file is not writable - Mailing list pgsql-bugs

From Michael Paquier
Subject Re: csvlog gets crazy when csv file is not writable
Date
Msg-id 20180821022342.GD2897@paquier.xyz
In response to csvlog gets crazy when csv file is not writable  (Alexander Kukushkin <cyberdemn@gmail.com>)
Responses Re: csvlog gets crazy when csv file is not writable
List pgsql-bugs
On Mon, Aug 20, 2018 at 03:55:01PM +0200, Alexander Kukushkin wrote:
> If for some reason postgres can't open 'postgresql-%Y-%m-%d.csv' file
> for writing, it gets mad and outputs a few thousands of lines to
> stderr:
>
> 2018-08-20 15:40:46.920 CEST [22069] PANIC:  could not open log file

Ah, this log message could be changed to simply "could not open
file", as the file name already offers enough context...

> And so on. ERRORDATA_STACK_SIZE appears in the output 3963 times
>
> Sure, it is entirely my fault that the csv file is not writable, but
> such an avalanche of PANIC lines is really scary.

Yeah, this is a recursion in logfile_open -> open_csvlogfile.  With
stderr there is a much better effort: the server just quits with a
FATAL if the log file cannot be opened in SysLogger_Start.  Could this
be an argument for allowing logfile_open() to use write_stderr?  I am
not sure whether that falls under the don't-do-that rule.  And we
already make sure that log_destination is writable at an early stage,
which would not cover scenarios like a kernel switching the log
partition to read-only after startup.
--
Michael
