On Tue, Aug 27, 2013 at 04:14:27PM +0200, Andres Freund wrote:
> On 2013-06-09 17:25:59 -0400, Noah Misch wrote:
> > ***************
> > *** 846,851 **** exec_simple_query(const char *query_string)
> > --- 847,856 ----
> >
> > TRACE_POSTGRESQL_QUERY_START(query_string);
> >
> > + #ifdef USE_VALGRIND
> > + VALGRIND_PRINTF("statement: %s\n", query_string);
> > + #endif
> > +
>
> Is there a special reason for adding more logging here? I find this
> makes the instrumentation much less useful since reports easily get
> buried in those traces. What's the advantage of doing this instead of
> log_statement=...? Especially as that location afaics won't help for the
> extended protocol?
I typically ran the server under "valgrind --log-file=...". Determining via
log_statement which SQL statement caused a particular Valgrind error would have
meant matching up timestamps between the two logs; having the statement printed
directly into the Valgrind log was easier. In retrospect, log_statement would
have sufficed had I let both Valgrind and PostgreSQL log to stderr. Emitting the
message in exec_simple_query() but not in exec_execute_message() is indeed
indefensible.
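
For the extended protocol, the analogous instrumentation would presumably go in
exec_execute_message(), something along these lines (an untested sketch; it
assumes the Valgrind client-request header is already available under
USE_VALGRIND, and that the portal's sourceText is the right text to report,
which can be NULL):

    #ifdef USE_VALGRIND
        /* Report the statement being executed through this portal. */
        VALGRIND_PRINTF("statement: %s\n",
                        portal->sourceText ? portal->sourceText : "<unknown>");
    #endif
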
That being said, could you tell me more about the workflow in which the extra
messages cause a problem? Do you typically diagnose each Valgrind error without
first isolating the pertinent SQL statement?
Thanks,
nm
--
Noah Misch
EnterpriseDB http://www.enterprisedb.com