Thread: "Not safe to send CSV data" message
So I went to investigate bug #5196: turned on log_destination = csvlog
etc, and restarted the postmaster.  I got this on stderr:

2009-11-18 20:08:52.104 EST : : LOG: loaded library "passwordcheck"
Not safe to send CSV data

The first line is a consequence of having still got
shared_preload_libraries set from testing the new contrib/passwordcheck
module.  However, what I'm on the warpath about right now is the second
line.  That message is useless, not to mention not conformant to our
style guidelines.  And the fact that it comes out at all suggests that
the csvlog startup logic is rather broken.  Comments?

        regards, tom lane
Andrew Dunstan <andrew@dunslane.net> writes:
> So the logger there has been doing CSV logging for quite a while without
> memory ballooning.

I was able to generate a noticeable leak by cranking log_rotation_size
way down ... it's about 1K per size rotation event.

        regards, tom lane
Andrew Dunstan <andrew@dunslane.net> writes:
> Tom Lane wrote:
>> And the fact that it comes out at all suggests that
>> the csvlog startup logic is rather broken.  Comments?

> Not sure why you say that. This can only happen very early in the
> startup process before the postmaster has had a chance to finish setting
> up the syslogger process and dup the pipes. As soon as that happens
> redirection_done is set to true and this message is no longer possible.

Well, in that case the code is operating as designed and the bleating is
simply inappropriate.

What I was wondering was whether we should try to launch the syslogger
before we do process_shared_preload_libraries().  But now that I think
about it, I think that ordering was intentional, on the grounds that some
types of monitoring plugins might want to be live in all postmaster
children including the syslogger.  In any case there will certainly
always be *some* postmaster messages that could be emitted after setting
the log_destination GUC and before launching the syslogger child.  If the
designed behavior is that we dump to stderr during that interval, we
should just do it, without the useless and confusing bleating.

        regards, tom lane
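[For readers following along without the source at hand, here is a minimal,
self-contained sketch of the fallback behavior being discussed. Apart from
redirection_done and LOG_DESTINATION_CSVLOG, every name below is an
illustrative stand-in rather than the actual elog.c code, and the log-line
contents are invented:]

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative stand-ins; only redirection_done and LOG_DESTINATION_CSVLOG
 * correspond to identifiers discussed in the thread.  This is not elog.c.
 */
#define LOG_DESTINATION_STDERR  0x01
#define LOG_DESTINATION_CSVLOG  0x08

static int  log_destination = LOG_DESTINATION_CSVLOG;
static bool redirection_done = false;   /* true once the syslogger pipe is duped */

/*
 * If csvlog is requested but the syslogger isn't up yet, fall back to a
 * plain-text line on stderr -- silently, with no "Not safe to send CSV
 * data" bleat.
 */
static void
emit_log_line(const char *plain_line, const char *csv_line)
{
    if ((log_destination & LOG_DESTINATION_CSVLOG) && redirection_done)
        fputs(csv_line, stderr);    /* stderr now feeds the syslogger pipe */
    else
        fputs(plain_line, stderr);  /* early-startup fallback */
}

int
main(void)
{
    /* Before the syslogger child exists (e.g. during shared_preload_libraries): */
    emit_log_line("LOG:  loaded library \"passwordcheck\"\n",
                  "2009-11-18 20:08:52.104 EST,,,LOG,loaded library \"passwordcheck\"\n");

    /* After the postmaster launches the syslogger and dups the pipes: */
    redirection_done = true;
    emit_log_line("LOG:  autovacuum launcher started\n",
                  "2009-11-18 20:08:53.000 EST,,,LOG,autovacuum launcher started\n");
    return 0;
}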
Tom Lane wrote:
> In any case there will certainly always be *some* postmaster messages
> that could be emitted after setting the log_destination GUC and before
> launching the syslogger child.  If the designed behavior is that we dump
> to stderr during that interval, we should just do it, without the
> useless and confusing bleating.

I'm fine with that. I don't remember whether I put that in or whether it
came from the original patch author(s). Either way, I assume the reason
was to explain why the message appeared on stderr rather than the CSVlog.
Now we have a couple of years of experience with CSVlog I agree it's not
needed (if it were we'd probably have had more complaints like yours
anyway).

cheers

andrew
Tom Lane wrote:
> So I went to investigate bug #5196: turned on log_destination = csvlog
> etc, and restarted the postmaster.  I got this on stderr:
>
> 2009-11-18 20:08:52.104 EST : : LOG: loaded library "passwordcheck"
> Not safe to send CSV data
>
> The first line is a consequence of having still got
> shared_preload_libraries set from testing the new contrib/passwordcheck
> module.  However, what I'm on the warpath about right now is the second
> line.  That message is useless, not to mention not conformant to our
> style guidelines.

(donning armor) Possibly.

> And the fact that it comes out at all suggests that
> the csvlog startup logic is rather broken.  Comments?

Not sure why you say that. This can only happen very early in the startup
process before the postmaster has had a chance to finish setting up the
syslogger process and dup the pipes. As soon as that happens
redirection_done is set to true and this message is no longer possible.

My memory is fading a bit about this now, but ISTR we debated the logic
some when I was working to get this committed around 2 1/2 years ago.
It's quite possible we (or I) got it wrong, but I'm not sure what we
should be doing if the logger isn't ready yet.

BTW, re that bug, I recently turned on CSV logging under 8.3 for a client
running Suse 10.2 64bit, with fairly heavy usage. Here is the current
state of the non-client processes:

postgres  7080  0.0  0.4 12913224   267216 ?  S   Oct28   2:58 /usr/bin/postmaster -D /pg_database/pg_data1/pg
postgres  7087  0.1  0.0    99824    60696 ?  Ss  Oct28  30:32 postgres: logger process
postgres  7092  0.0 16.3 12919564 10791472 ?  Ss  Oct28   7:41 postgres: writer process
postgres  7093  0.0  0.0 12919256    10432 ?  Ss  Oct28   0:29 postgres: wal writer process
postgres  7094  0.0  0.0 12920892     3544 ?  Ss  Oct28   0:00 postgres: autovacuum launcher process
postgres  7095  0.0  0.0    34396     1904 ?  Ss  Oct28   0:05 postgres: archiver process   last was 0000000100000205000000A3
postgres  7096  0.0  0.0    36156     2916 ?  Ss  Oct28  10:26 postgres: stats collector process

So the logger there has been doing CSV logging for quite a while without
memory ballooning. (The writer process, on the other hand, has eaten up a
boatload of memory, by the look of it.)

cheers

andrew
On Thu, Nov 19, 2009 at 3:50 AM, Andrew Dunstan <andrew@dunslane.net> wrote:
> I'm fine with that. I don't remember whether I put that in or whether it
> came from the original patch author(s). Either way, I assume the reason was
> to explain why the message appeared on stderr rather than the CSVlog. Now we
> have a couple of years of experience with CSVlog I agree it's not needed (if
> it were we'd probably have had more complaints like yours anyway).

ISTM the danger is that someone looks at the regular logs and isn't
aware that some messages went to someplace else. In which case bleating
to the someplace else is unhelpful.

Perhaps it would be more useful if it set a flag and then once the
regular logs are set up you output a regular warning that some errors
were generated prior to switching and were sent to stderr.

-- 
greg
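[Greg's idea, sketched very roughly -- the names below are hypothetical, not
proposed patch code; in a real patch the deferred warning would presumably be
issued through ereport() once the syslogger is up:]

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical flag: set whenever a startup message had to be dumped to the
 * original stderr because the syslogger/csvlog pipe wasn't ready yet. */
static bool early_messages_on_stderr = false;

static void
divert_to_stderr(const char *line)
{
    fputs(line, stderr);
    early_messages_on_stderr = true;
}

/* Called once the syslogger is running and log_destination is honored. */
static void
report_early_diversion(void)
{
    if (early_messages_on_stderr)
        fputs("LOG:  some messages emitted before log redirection was "
              "established were written to the original stderr\n",
              stderr);              /* stand-in for a regular LOG-level report */
}

int
main(void)
{
    divert_to_stderr("LOG:  loaded library \"passwordcheck\"\n");
    /* ... postmaster starts the syslogger, pipes are duped ... */
    report_early_diversion();
    return 0;
}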
Greg Stark <gsstark@mit.edu> writes:
> ISTM the danger is that someone looks at the regular logs and isn't
> aware that some messages went to someplace else. In which case
> bleating to the someplace else is unhelpful.

Yes, that's my problem with it in a nutshell.  Anybody who is smart
enough to look at the original stderr doesn't need the warning; all it
does is distract from the real messages.

> Perhaps it would be more useful if it set a flag and then once the
> regular logs are set up you output a regular warning that some errors
> were generated prior to switching and were sent to stderr.

That'd be nice, but I'm unconvinced that it is feasible.  The uncaught
messages might have come from subprocesses, or from library code that
isn't polite enough to go through elog.  We go to a lot of trouble to be
able to capture all such traffic once the logger process is alive.
Pretending that we can tell you about traffic we didn't capture seems to
me to be likely to instill a false sense of confidence.

        regards, tom lane