Stephen Frost wrote:
> The flip side is that there are absolutely production cases where what
> we output is either too little or too much- being able to control that
> and then have the (filtered) result in JSON would be more-or-less
> exactly what a client of ours is looking for.
My impression is that the JSON fields are going to be more or less
equivalent to the current csvlog columns (what else could they be?). So
if you can control what you hand your auditors by filtering on
individual JSON attributes, you could just as well select columns by
position from the hardcoded CSV definition we use for csvlog.
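
To make that concrete, here's a rough sketch of the equivalence. The
jsonlog field names and the column positions are my assumptions, based
on the documented csvlog column order, not on any committed format:

import json

# Hypothetical jsonlog line reusing the csvlog column names; these
# field names are an assumption, not a committed format.
json_line = ('{"log_time": "2014-09-02 10:00:00", "user_name": "alice",'
             ' "message": "duration: 1.2 ms"}')
entry = json.loads(json_line)

# Filtering by JSON attribute: keep only what the auditors may see.
allowed = {"log_time", "user_name", "message"}
filtered = {k: v for k, v in entry.items() if k in allowed}

# The same filter on csvlog: the column set is hardcoded, so positions
# are stable (log_time is column 1, user_name column 2, message
# column 14 in the documented order).
row = ["2014-09-02 10:00:00", "alice"] + [""] * 11 + ["duration: 1.2 ms"]
allowed_cols = {0, 1, 13}
filtered_row = [v for i, v in enumerate(row) if i in allowed_cols]

print(filtered, filtered_row)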
> To try to clarify that a bit, as it comes across as rather opaque even
> on my re-reading, consider a case where you can't have the
> "credit_card_number" field ever exported to an audit or log file, but
> you're required to log all other changes to a table. Then consider that
> such a situation extends to individual INSERT or UPDATE commands- you
> need the command logged, but you can't have the contents of that column
> in the log file.
It seems a bit far-fetched to think that you will be able to rip out
parts of queries by applying JSON operators to the query text. Perhaps
your intention is to log queries using something similar to the JSON
blobs I'm using in the DDL deparse patch?
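
To illustrate the difference (the structured blob below is an invented
shape, loosely in the spirit of the deparse output, not the patch's
actual format):

# The jsonlog view of a statement: the query is one opaque string, so
# no JSON operator can reliably pick the card number out of it.
logged = {"query": "INSERT INTO payments VALUES ('bob', '4111-xxxx')"}
logged.pop("query")  # all-or-nothing: drop the statement entirely

# A deparse-style structured blob (invented shape, for illustration)
# decomposes the statement, so per-column redaction becomes trivial:
stmt = {
    "command": "INSERT",
    "table": "payments",
    "columns": [
        {"name": "customer", "value": "'bob'"},
        {"name": "credit_card_number", "value": "'4111-xxxx'"},
    ],
}
for col in stmt["columns"]:
    if col["name"] == "credit_card_number":
        col["value"] = "<redacted>"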
> Our current capabilities around logging and auditing are dismal and
> extremely frustrating when faced with these kinds of, quite real,
> requirements. I'll be in an internal meeting more-or-less all day
> tomorrow discussing auditing and how we might make things easier for
> organizations which have these requirements- would certainly welcome any
> thoughts in that direction.
My own thought is: JSON is good, but sadly it doesn't cure cancer.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services