Stephen Frost <sfrost@snowman.net> writes:
> * Tom Lane (tgl@sss.pgh.pa.us) wrote:
>> I think the extra representational overhead is already a good reason to
>> say "no". There is not any production scenario I've ever heard of where
>> log output volume isn't a consideration.
> The flip side is that there are absolutely production cases where what
> we output is either too little or too much; being able to control that
> and then have the (filtered) result in JSON would be more-or-less
> exactly what a client of ours is looking for.
> To try to clarify that a bit, as it comes across as rather opaque even
> on my re-reading, consider a case where you can't have the
> "credit_card_number" field ever exported to an audit or log file, but
> you're required to log all other changes to a table. Then consider that
> such a situation extends to individual INSERT or UPDATE commands: you
> need the command logged, but you can't have the contents of that column
> in the log file.
Hmm ... that's a lovely use-case, but somehow I don't find "let's output
in JSON" to be a credible solution. Not only are you paying extra log
output volume to do that, but you are also supposing that some downstream
process is going to parse the JSON, remove some data (which JSON-izing
the log format didn't especially help with, let's be honest), and re-emit
JSON afterwards. There's no way that scales to any production situation,
even if you had parallel log-filtering processes, which you won't.
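To make that concrete: the most favorable version of that downstream
filter is something like this (a minimal Python sketch; the
one-object-per-line framing and the "statement" key are assumptions,
since nothing today emits logs in that form):

    import json
    import re
    import sys

    # Assumed log-line shape: one self-contained JSON object per line,
    # with the query text under a hypothetical "statement" key.
    CC_PATTERN = re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b")

    for line in sys.stdin:
        record = json.loads(line)
        stmt = record.get("statement")
        if stmt is not None:
            # Redact anything that looks like a card number, then re-emit.
            record["statement"] = CC_PATTERN.sub("[REDACTED]", stmt)
        sys.stdout.write(json.dumps(record) + "\n")

Every log line gets parsed, scanned, and re-serialized, which is exactly
the per-line overhead I don't believe survives contact with a busy server.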
I am interested in thinking about your scenario; I just don't think
that JSON output format is any real part of the answer.
regards, tom lane