Gregory Stark wrote:
> "Zeugswetter Andreas ADI SD" <ZeugswetterA@spardat.at> writes:
>
>>> First we must run the query in serializable mode and replace
>>> the snapshot with a synthetic one, which defines visibility
>>> at the start of the desired transaction
>> We could use something that controls "global xmin".
> It would ensure that global xmin does not advance beyond
>> what still needs to be visible. This would probably be a
>> sliding time window, or a fixed point in time that is
>> released by the dba/user.
>
> Well there's another detail you have to cover aside from rolling back your
> xmin. You have to find the rest of the snapshot including knowing what other
> transactions were in-progress at the time you want to flash back to.
>
> If you just roll back xmin and set xmax to the same value you'll get a
> consistent view of the database but it may not match a view that was ever
> current. That is, some of the transactions after the target xmin may have
> committed before that xmin. So there was never a time in the database when
> they were invisible but your new xmin was visible.
>
>[...]
> Incidentally this is one of the things that would be useful for read-only
> access to PITR warm standby machines.
>
Couldn't you define things simply so that you get a consistent view
including all transactions started before transaction x? This is time
travel lite, but the low overhead is, I think, a key benefit of this
approach.
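Gregory's objection above can be illustrated with a toy model of snapshot visibility (Python for illustration only; the `Snapshot` class and `visible()` helper are invented here and are not PostgreSQL code):

```python
# Toy model of MVCC snapshot visibility.  Simplification: commit status
# is a single global set rather than per-snapshot clog state.
from dataclasses import dataclass

@dataclass(frozen=True)
class Snapshot:
    xmin: int                             # oldest xid still running when taken
    xmax: int                             # first xid not yet assigned when taken
    in_progress: frozenset = frozenset()  # xids running when taken

def visible(xid, committed, snap):
    """A tuple created by `xid` is visible under `snap` if the xid is
    below xmax, was not in progress at snapshot time, and committed."""
    return xid < snap.xmax and xid not in snap.in_progress and xid in committed

# History: xid 100 starts; xid 101 starts and commits; then 100 commits.
committed = {100, 101}

# A real snapshot taken while 100 was still running shows 101 but not 100.
real_snap = Snapshot(xmin=100, xmax=102, in_progress=frozenset({100}))
assert visible(101, committed, real_snap)
assert not visible(100, committed, real_snap)

# "Flashback" by just rolling xmin back and setting xmax = xmin, with no
# in-progress list: say we target "just after 100 committed" with
# xmin = xmax = 101.
synthetic = Snapshot(xmin=101, xmax=101)
assert visible(100, committed, synthetic)       # 100 shown
assert not visible(101, committed, synthetic)   # 101 hidden
# This view is consistent but was never current: 101 committed before 100
# did, so at no real moment was 100 visible while 101 was hidden.  A
# faithful flashback snapshot also needs the in-progress list from then.
```

The point being that the synthetic snapshot must carry the full in-progress transaction list from the target moment, not just a rolled-back xmin/xmax pair.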
A huge value for this would be in the "oops, I deleted my data"
category. PostgreSQL rarely loses data, but clients seem to have a habit
of doing so, and then going "oops". This seems to happen most often when
facing something like a reporting deadline, where they are moving lots
of data around, making copies, and sometimes delete the wrong "company"
recordset or equivalent, even with confirmation dialogs at the app level.
This would give the client a quick and easy oops procedure. The DBA sets
the GUC to one hour and tells the client: if you make a big mistake,
stop the database server as follows and call. Frankly, it would bail a
few DBAs out as well.
The key is how lightweight the setup could be, which matters because
clients are not always willing to pay for a PITR setup. The low overhead
would mean you'd feel fine about setting the GUC to an hour or so.
As a percentage of total installed instances, I suspect the share with
PITR is small. I've got stuff I snapshot nightly, but that's it, so I
don't have an easy out from the oops query either.
- August