Re: Temporal Databases - Mailing list pgsql-general

From Brad Nicholson
Subject Re: Temporal Databases
Date
Msg-id 43FF3ADA.805@ca.afilias.info
In response to Re: Temporal Databases  (Simon Riggs <simon@2ndquadrant.com>)
Responses Re: Temporal Databases  ("Jim C. Nasby" <jnasby@pervasive.com>)
List pgsql-general
Simon Riggs wrote:

>A much easier way is to start a serialized transaction every 10 minutes
>and leave the transaction idle-in-transaction. If you decide you really
>need to you can start requesting data through that transaction, since it
>can "see back in time" and you already know what the snapshot time is
>(if you record it). As time moves on you abort and start new
>transactions... but be careful that this can affect performance in other
>ways.

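If I follow Simon's suggestion correctly, the idle-snapshot approach would look roughly like this (an untested sketch; in PostgreSQL, SERIALIZABLE gives you a snapshot frozen at the start of the transaction):

```sql
-- Start a snapshot "anchor" transaction every 10 minutes:
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Record the snapshot time so you know what point in time
-- this transaction can "see":
SELECT now();

-- ...leave the transaction idle-in-transaction; if needed, run
-- queries through this session to read data as of the snapshot...

-- When the next snapshot is due, discard this one and begin again:
ROLLBACK;
```

As Simon notes, long-lived open transactions have side effects (e.g. they hold back VACUUM), so this is a trade-off.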
We're currently prototyping a system (still very much in its infancy)
that uses the Slony-I log shipping mechanism to build an offline temporal
system for point-in-time reporting purposes.  The idea is that the
log shipping files contain only the committed inserts, updates and
deletes.  Those log files are then applied to an offline system which
has a trigger defined on each table; the trigger rewrites each statement,
based on the type of statement, into a temporally sensitive format.

If you want to get an exact point-in-time snapshot with this approach,
you will need timestamps on all tables in your source database that
record the exact time of each statement.  Otherwise, a best guess
(based on the time the Slony SYNC was generated) is the closest that
you will be able to come.
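Stamping statement times at the source could be as simple as this sketch (again with hypothetical names):

```sql
-- Hypothetical: record the exact statement time on every row at the
-- source, so the offline system can reconstruct an exact snapshot.
ALTER TABLE accounts ADD COLUMN modified_at timestamptz;

CREATE OR REPLACE FUNCTION stamp_modified() RETURNS trigger AS $$
BEGIN
    NEW.modified_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_stamp
BEFORE INSERT OR UPDATE ON accounts
FOR EACH ROW EXECUTE PROCEDURE stamp_modified();
```

Note that now() is frozen at transaction start, so all rows touched by one transaction get the same timestamp, which is what you want for snapshot consistency.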

--
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.


