On 2022-08-08 Mo 07:34, Marcos Pegoraro wrote:
>
>
> How are you running postgres? If the logger process runs into
> trouble it might
> write to stderr.
>
> Is there a chance your huge statements would make you run out of
> space?
>
> Well, I don't think it is an out-of-space problem, because it
> doesn't stop logging; it just splits that message. As you can see, the
> next message is logged properly. And that statement is not so huge;
> these statements are no more than 10 or 20 kB. And as I said, these
> statements occur dozens of times a day, but only once or twice is one
> not logged correctly.
> One additional piece of info: that split message has an out-of-order
> log time. At the time, the log file was receiving 2 or 3 entries per
> second, and that message was timestamped 1 or 2 minutes later. It
> seems the statement occurs now but is stored a minute or two later.
>
>
It looks like a failure of the log chunking protocol, with long messages
being improperly interleaved. I don't think we've had reports of such a
failure since commit c17e863bc7 back in 2012, but maybe my memory is
failing.
What platform is this on? Is it possible that on some platform the chunk
size we're using is not doing an atomic write?
syslogger.h says:
#ifdef PIPE_BUF
/* Are there any systems with PIPE_BUF > 64K? Unlikely, but ... */
#if PIPE_BUF > 65536
#define PIPE_CHUNK_SIZE 65536
#else
#define PIPE_CHUNK_SIZE ((int) PIPE_BUF)
#endif
#else /* not defined */
/* POSIX says the value of PIPE_BUF must be at least 512, so use that */
#define PIPE_CHUNK_SIZE 512
#endif
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com