Re: postgres 7.2 features. - Mailing list pgsql-hackers

From Chris Bitmead
Subject Re: postgres 7.2 features.
Date
Msg-id 396A98A2.CC3C818@nimrod.itg.telecom.com.au
In response to RE: postgres 7.2 features.  ("Mikheev, Vadim" <vmikheev@SECTORBASE.COM>)
Responses Re: postgres 7.2 features.
List pgsql-hackers
The bottom line is that the original postgres time-travel implementation
was totally cost-free. Actually, it may even have sped things up, since
vacuum would have less work to do. Can you convince me that triggers can
come anywhere near that performance? I can't see how. All I'm asking is:
don't damage anything that is in postgres now and relevant to time
travel in your quest for WAL....

> With the original TT:
> 
> - you are not able to use indices to fetch tuples on time base;

That doesn't sound very hard to fix.

> - you are not able to control tuples life time;

From the docs... "Applications that do not want to save historical data
can specify a cutoff point for a relation. Cutoff points are defined by
the discard command." The command 'discard EMP before "1 week"' deletes
data in the EMP relation that is more than 1 week old.

> - you have to store commit time somewhere;

Ok, so?

> - you have to store additional 8 bytes for each tuple;

A small price for time travel.

> - 1 sec could be tooo long time interval for some uses of TT.

So someone can implement finer granularity in the future. If time travel
disappears, that option is no longer open.

> And, btw, what could be *really* very useful it's TT + referential integrity
> feature. How could it be implemented without triggers?

In what way does TT not have referential integrity? As long as the
system ensures that every transaction writes the same timestamp to all
the tuples it touches, referential integrity continues to hold.
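
To make that concrete, here is a rough sketch using the time-qualification
syntax from the old POSTGRES documentation (the EMP/DEPT relations and
their columns are invented for illustration):

  -- Rows written by the same transaction carry the same commit time, so
  -- a time-qualified scan of either relation reflects one consistent
  -- snapshot of the database.
  SELECT EMP.name, DEPT.dname
    FROM EMP['epoch', 'now'], DEPT['epoch', 'now']
   WHERE EMP.deptno = DEPT.deptno;

  -- A query over an interval ending before that transaction committed
  -- sees neither the new EMP row nor the new DEPT row, never a dangling
  -- half of the pair.
  SELECT name FROM EMP['epoch', 'Jan 1 00:00:00 2000'];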

> Imho, triggers can give you much more flexible and useful TT...
> 
> Also note that TT was removed from Illustra and authors wrote that
> built-in TT could be implemented without non-overwriting smgr.

Of course it can be, but can it be done anywhere near as efficiently?
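
For comparison, trigger-emulated time travel means copying every
superseded row into a side table, so each update does its write twice.
A rough sketch in present-day PL/pgSQL (emp_history, emp_archive and the
columns are invented; this is just to show the shape of the approach):

  -- History table holding superseded EMP rows plus the time they were
  -- replaced or removed.
  CREATE TABLE emp_history (
      name          text,
      salary        integer,
      superseded_at timestamp
  );

  -- On every UPDATE or DELETE, store the old row a second time.
  CREATE FUNCTION emp_archive() RETURNS trigger AS $$
  BEGIN
      INSERT INTO emp_history VALUES (OLD.name, OLD.salary, now());
      IF TG_OP = 'DELETE' THEN
          RETURN OLD;
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER emp_tt BEFORE UPDATE OR DELETE ON emp
      FOR EACH ROW EXECUTE PROCEDURE emp_archive();

Every superseded version gets written a second time; the no-overwrite
storage manager already had that copy on disk for free.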

> > > It was mentioned here that triggers could be used for async
> > > replication, as well as WAL.
> >
> > Same story. Major inefficiency. Replication is tough enough without
> > mucking around with triggers. Once the trigger executes you've got
> > to go and store the data in the database again anyway. Then figure
> > out when to delete it.
> 
> What about reading WAL to get and propagate changes? I don't think that
> reading tables will be more efficient and, btw,
> how to know what to read (C) -:) ?

Maybe that is a good approach, but it's not clear that it is the best;
more research is needed. With the no-overwrite storage manager there
already exists a mechanism for deciding how long a tuple lives, and this
could easily be tapped into for replication purposes. Vacuum could then
serve two purposes: vacuuming and replicating.
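
For contrast, here is roughly what trigger-driven capture ends up looking
like (emp_changes and its columns are invented for illustration): the
data gets stored a second time in a queue table that something must later
read, propagate, and purge, whereas the commit times the no-overwrite
storage manager already keeps could in principle be scanned directly.

  -- Queue of captured changes awaiting propagation to a replica; an
  -- AFTER INSERT/UPDATE/DELETE trigger (much like the archive trigger
  -- above) would insert one row here per change.
  CREATE TABLE emp_changes (
      op          text,
      name        text,
      salary      integer,
      captured_at timestamp DEFAULT now()
  );

  -- The replicator reads and forwards the queued rows...
  SELECT op, name, salary FROM emp_changes ORDER BY captured_at;

  -- ...and then has to decide when it is safe to throw them away.
  DELETE FROM emp_changes WHERE captured_at < now() - interval '1 day';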

