I do option 3. A generic trigger writes the table name, field name, type
of change (insert/update/delete), and the old and new values for columns
that were affected. It is kind of a hog, but it works very well. I
have cron delete old entries so it doesn't eat my whole disk.
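For the record, a minimal sketch of that kind of trigger (names like
audit_log and some_table are made up, and it assumes a PostgreSQL new
enough to have to_jsonb(), which makes walking the columns of OLD and NEW
generic; on an older server you'd do the same thing in C):

CREATE TABLE audit_log (
    changed_at  timestamptz NOT NULL DEFAULT now(),
    table_name  text NOT NULL,
    field_name  text,
    change_type text NOT NULL,          -- 'INSERT', 'UPDATE', or 'DELETE'
    old_value   text,
    new_value   text
);

CREATE OR REPLACE FUNCTION generic_audit() RETURNS trigger AS $$
DECLARE
    old_row jsonb;
    new_row jsonb;
    col     text;
BEGIN
    IF TG_OP = 'INSERT' THEN
        old_row := '{}'::jsonb;   new_row := to_jsonb(NEW);
    ELSIF TG_OP = 'DELETE' THEN
        old_row := to_jsonb(OLD); new_row := '{}'::jsonb;
    ELSE
        old_row := to_jsonb(OLD); new_row := to_jsonb(NEW);
    END IF;

    -- one audit row per column whose value actually changed
    FOR col IN SELECT jsonb_object_keys(old_row || new_row) LOOP
        IF old_row -> col IS DISTINCT FROM new_row -> col THEN
            INSERT INTO audit_log
                   (table_name, field_name, change_type, old_value, new_value)
            VALUES (TG_TABLE_NAME, col, TG_OP,
                    old_row ->> col, new_row ->> col);
        END IF;
    END LOOP;
    RETURN NULL;                        -- AFTER trigger, return value ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_some_table
AFTER INSERT OR UPDATE OR DELETE ON some_table
FOR EACH ROW EXECUTE PROCEDURE generic_audit();

The cron job is then just a periodic

DELETE FROM audit_log WHERE changed_at < now() - interval '90 days';

with whatever retention window your disk can afford.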
I haven't tried to get it to give up the data in the same representation
as the source table, but it shouldn't be too hard with a set-returning
function, such that you tell it the table name and timestamp and it
returns records as they existed at that time. I usually just query it
directly to "see what happened".
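Against the hypothetical audit_log above, the "see what happened" query is
just something like

SELECT changed_at, change_type, field_name, old_value, new_value
FROM audit_log
WHERE table_name = 'some_table'
  AND changed_at > now() - interval '1 day'
ORDER BY changed_at;

One caveat for the set-returning-function idea: to rebuild whole records,
rather than a per-column change history, the log would also have to record
each row's primary key value, which the sketch above does not.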
>>> Scott Frankel <leknarf@pacbell.net> 03/03/05 3:51 PM >>>
Is there a canonical form that db schema designers use
to save changes to the data in their databases?
For example, given a table with rows of data, if I UPDATE
a field in a row, the previous value is lost. If I wanted to
track the changes to my data over time, it occurs to me that
I could:
1) copy the whole row of data with the new value, thus
leaving the old row intact in the db for fishing expeditions,
posterity, &c.
-- awfully wasteful, especially with binary data
2) enter a new row that contains only the new data fields, requiring
a full record to be rebuilt through heavy lifting and multiple
queries through 'n' old rows
-- overly complex query design, probably leading to errors
3) create a new table that tracks changes
-- the table is either wide enough to mirror all columns in
the working table, or uses generic columns and API tricks to
parse token-pair strings, ...
4) other?
Thanks
Scott