On Tue, 11 Jun 2024 at 17:03, hubert depesz lubaczewski <depesz@depesz.com> wrote:
On Tue, Jun 11, 2024 at 12:47:14AM +0530, veem v wrote:
> to be called from ~50 triggers? or any other better approach exists to
> handle this?
pgaudit extension?
Or just write all the changes to single table?
Or use dynamic queries that will build the insert based on the name of table the event happened on?
Or pass arguments?
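For the "single table" option, a minimal sketch might look like the following. All names here (audit_log, audit_generic, the source table) are illustrative, not from this thread; it records every row as JSONB so no per-table column lists are needed:

```sql
-- Hypothetical generic audit table; one row per change, any source table.
CREATE TABLE audit_log (
    audit_id    bigserial   PRIMARY KEY,
    table_name  text        NOT NULL,
    action_code char(1)     NOT NULL,  -- 'I', 'U', or 'D'
    changed_at  timestamptz NOT NULL DEFAULT current_timestamp,
    old_row     jsonb,
    new_row     jsonb
);

CREATE OR REPLACE FUNCTION audit_generic() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log (table_name, action_code, old_row, new_row)
    VALUES (TG_TABLE_NAME,
            left(TG_OP, 1),  -- INSERT/UPDATE/DELETE -> I/U/D
            CASE WHEN TG_OP IN ('UPDATE', 'DELETE') THEN to_jsonb(OLD) END,
            CASE WHEN TG_OP IN ('INSERT', 'UPDATE') THEN to_jsonb(NEW) END);
    RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;

-- The same trigger definition is then repeated on each audited table:
CREATE TRIGGER audit_source_table1
AFTER INSERT OR UPDATE OR DELETE ON source_table1
FOR EACH ROW EXECUTE FUNCTION audit_generic();
```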
Best regards,
depesz
Thank you so much. I hope you mean something like the below when you say making it dynamic. I ask because our audit tables have more columns than the corresponding source tables, and for a few of them the column names are slightly different.
-- Trigger for deletes
CREATE TRIGGER before_delete
BEFORE DELETE ON source_table
FOR EACH ROW EXECUTE FUNCTION log_deletes();

-- Trigger for source_table1
CREATE TRIGGER before_delete_source_table1
BEFORE DELETE ON source_table1
FOR EACH ROW EXECUTE FUNCTION log_deletes();

-- Trigger for source_table2
CREATE TRIGGER before_delete_source_table2
BEFORE DELETE ON source_table2
FOR EACH ROW EXECUTE FUNCTION log_deletes();
CREATE OR REPLACE FUNCTION log_deletes() RETURNS TRIGGER AS $$
BEGIN
    IF TG_TABLE_NAME = 'source_table1' THEN
        INSERT INTO delete_audit1 (col1, col2, col3)
        VALUES (OLD.col1, OLD.col2, OLD.col3);
    ELSIF TG_TABLE_NAME = 'source_table2' THEN
        INSERT INTO delete_audit2 (col4, col5, col6)
        VALUES (OLD.col4, OLD.col5, OLD.col6);
    -- Add more conditions for other tables
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
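A truly dynamic version would avoid the IF/ELSIF chain entirely. A hedged sketch, assuming an audit_<source_table> naming convention (hypothetical here) and that mirrored columns keep the source column names, with jsonb_populate_record() leaving any extra audit columns NULL; columns that were renamed would still need explicit handling:

```sql
CREATE OR REPLACE FUNCTION log_deletes_dynamic() RETURNS trigger AS $$
BEGIN
    -- Fill the audit row by column NAME from the old source row;
    -- audit-only columns stay NULL (or take their defaults elsewhere).
    EXECUTE format(
        'INSERT INTO %I SELECT r.* FROM jsonb_populate_record(NULL::%I, $1) AS r',
        'audit_' || TG_TABLE_NAME,
        'audit_' || TG_TABLE_NAME)
    USING to_jsonb(OLD);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
```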
Dear god, no.
Since all the functions are going to be similar, I'd write a shell script to generate all the triggers, one per relevant table. If you're going to record every field, then save yourself the effort and don't bother enumerating them. You'll need to dig into the PG catalog's guts to list columns in the correct order, but Google and Stack Exchange make that easy enough.
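As one way to do that generation (a sketch; table and trigger names are placeholders, and the schema filter is an assumption), the catalog itself can emit the DDL and the ordered column lists:

```sql
-- Emit one CREATE TRIGGER statement per ordinary table in schema 'public'.
SELECT format(
    'CREATE TRIGGER before_delete_%1$s BEFORE DELETE ON %1$I FOR EACH ROW EXECUTE FUNCTION log_deletes();',
    c.relname)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public' AND c.relkind = 'r';

-- List one table's live columns in ordinal (attnum) order, quoted safely.
SELECT string_agg(quote_ident(attname), ', ' ORDER BY attnum)
FROM pg_attribute
WHERE attrelid = 'source_table1'::regclass
  AND attnum > 0
  AND NOT attisdropped;
```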
(And, of course, that single trigger would be SLOW.)
This is essentially what we did 25 years ago to "logically replicate" data from our OLTP system to the OLAP system. There were two log tables for every table to be replicated: foo_LOG1 and foo_LOG2. The trigger wrote to foo_LOG1 on even days, and foo_LOG2 on odd days. It even added a current_timestamp column and an action_code column ("I" for insert, "D" for delete, and "U" for update).
At around 01:00, a batch job copied out all of "yesterday's" log data (there were 80-90 tables), and then truncated the table.