Re: Detecting File Damage & Inconsistencies - Mailing list pgsql-hackers

From Simon Riggs
Subject Re: Detecting File Damage & Inconsistencies
Msg-id CANP8+jJKNP6Js9AOZU0PhZJQPb+4vupywXnop=xQuTDF2=bN7g@mail.gmail.com
In response to RE: Detecting File Damage & Inconsistencies  ("tsunakawa.takay@fujitsu.com" <tsunakawa.takay@fujitsu.com>)
Responses Re: Detecting File Damage & Inconsistencies
Re: Detecting File Damage & Inconsistencies
List pgsql-hackers
On Fri, 13 Nov 2020 at 00:50, tsunakawa.takay@fujitsu.com
<tsunakawa.takay@fujitsu.com> wrote:
>
> From: Simon Riggs <simon@2ndquadrant.com>
> > If a rogue user/process is suspected, this would allow you to identify
> > more easily the changes made by specific sessions/users.
>
> Isn't that kind of auditing a job of pgAudit or log_statement = mod?  Or, does "more easily" mean that you find
> pgAudit complex to use and/or log_statement's overhead is big?

Well, I designed pgaudit, so yes, I think pgaudit is useful.

However, pgaudit works at the statement level, not the data level. So
using pgaudit to locate data rows that have changed is fairly hard.

What I'm proposing is an option to add 16 bytes onto each COMMIT
record, which is considerably less than turning on full auditing in
pgaudit. This option would allow identifying data at the row level, so
you could for example find all rows changed by specific sessions.
Also, because it is stored in WAL, it will show updates that may no
longer exist in the database because the changed row versions have
since been vacuumed away. So pgaudit will tell you that a change
happened, but having the extra information in WAL is important as well.
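
As a rough illustration of what would fit in 16 bytes (the struct name
and field choices below are only a sketch for this discussion, not the
actual patch), the commit record could carry something like a role OID,
a backend PID and a session start timestamp:

/* Illustrative sketch only -- names and layout are not a final design */
#include <stdint.h>

typedef struct xl_xact_session_ext
{
    uint32_t    user_oid;       /* role that ran the transaction */
    uint32_t    backend_pid;    /* PID of the originating backend */
    uint64_t    session_start;  /* session start time, in microseconds */
} xl_xact_session_ext;          /* 4 + 4 + 8 = 16 bytes total */

With something like that in each commit record, output from pg_waldump
(or a logical decoding plugin) could be filtered down to just the
commits made by a particular session or role.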

So thank you for the question because it has allowed me to explain why
it is useful and important.

-- 
Simon Riggs                http://www.EnterpriseDB.com/


