Re: pg_dump --snapshot - Mailing list pgsql-hackers

From: Stephen Frost
Subject: Re: pg_dump --snapshot
Msg-id: 20130507160517.GY4361@tamriel.snowman.net
In response to: Re: pg_dump --snapshot (Greg Stark <stark@mit.edu>)
List: pgsql-hackers

Greg,

* Greg Stark (stark@mit.edu) wrote:
> One natural way to do it would be to make an option to pg_dump which
> caused it to do all the normal pre-dump things it would normally do,
> then export a snapshot and wait for the user. (Alternately it could
> even create a prepared transaction which iirc keeps the locks until
> it's committed). That gives users a way to get a snapshot that is
> guaranteed to work until that transaction is exited.

Right, that was one of the suggestions that I made up-thread a bit.
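For anyone following along, the exported-snapshot mechanics this would
build on look roughly like so (the snapshot identifier shown is made
up):

    -- Session 1: open a transaction and export its snapshot
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT pg_export_snapshot();    -- returns e.g. '00000004-1'
    -- This transaction must stay open for the snapshot to stay valid.

    -- Session 2: adopt the exported snapshot
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '00000004-1';
    -- Both sessions now see the same consistent view of the data.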

> But I don't think that would really make users happy. I think the
> usual use case for this  feature would be to dump a single table or
> small number of tables as of some time in the past that they didn't
> plan in advance that they would need. They might have a cron job
> periodically export a snapshot "just in case" and then want to use it
> later.

The snapshot has to be 'alive' in order to be joined, so unless we're
changing something else, I don't think this cron job approach would
actually work (unless the job forks off and keeps the transaction open,
but I didn't read that from what you wrote).
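To make that concrete, here's the failure you'd hit (error text
approximate, identifier made up):

    -- Cron-style "export and exit" doesn't keep the snapshot alive:
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT pg_export_snapshot();    -- e.g. '00000004-1'
    COMMIT;                         -- snapshot dies with the transaction

    -- Later, in another session:
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '00000004-1';
    -- ERROR:  invalid snapshot identifier: "00000004-1"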

> They wouldn't be happy if they had to create a prepared
> transaction for each such snapshot which locked every table in their
> database until they decide they don't actually need it. That would
> mean they could never do any ddl.

If they don't lock the table then any DDL they do would break the backup
*anyway*.  This is a pretty clear example of exactly the problem with
simply adding this option to pg_dump: the view of pg_class would be from
when the snapshot was taken, but without locks, anything could have
changed between then and when the backup actually runs.
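As a sketch of that window (table name is made up; a normal pg_dump
would be holding ACCESS SHARE locks that block the ALTER below):

    -- Session 1: export a snapshot, but take no table locks
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT pg_export_snapshot();    -- e.g. '00000004-1'

    -- Session 2: nothing blocks this, since no lock is held on t
    ALTER TABLE t ADD COLUMN extra text;

    -- A pg_dump later attached to the exported snapshot sees pg_class
    -- and pg_attribute as of before the ALTER, which no longer matches
    -- the table it will actually read.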

> The use case of wanting to dump a single table as of a few hours ago
> (e.g. before some application data loss bug) is pretty compelling. If
> we could do it it I think it would be worth quite a bit.

I certainly like the idea, but it seems rather beyond what Simon was
after with this patch, aiui, and it does come with some other concerns,
like dealing with what happens if the table was modified after the
snapshot was taken (which is possible because there wasn't a lock on it).

> What's the worst case for using an old snapshot? If I try to access a
> table that doesn't exist any longer I'll get an error. That doesn't
> really seem that bad for the use case I described. It's worse for the
> full table dump but for an explicit list of tables, eh. Seems ok to
> me.

I'm not convinced that the risk of an error is the only issue with
this.  What if a column was dropped and then a new one put in its place
(with the same name)?
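Something like this, with the table and column names invented for
illustration:

    -- After the snapshot was exported:
    ALTER TABLE t DROP COLUMN payload;
    ALTER TABLE t ADD COLUMN payload integer;  -- same name, new attnum/type

    -- Under the old snapshot the catalogs still show the original
    -- "payload", so a dump could silently emit a schema that doesn't
    -- match the data it copies, instead of failing cleanly with a
    -- "relation does not exist" error.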

> If I try to access a table whose schema has changed then I might use
> the wrong tupledesc  and see rows that don't decode properly. That
> would be a disaster. Can we protect against that by noticing that the
> pg_class row isn't visible to our snapshot and throw an error? Would
> that be sufficient to protect against all schema changes? Would it
> cause massive false positives based on whether vacuum had happened to
> have run recently?

The way this is handled now is that we use syscache to get whatever the
current view of the object is when we're running queries against it.
The problem is that the 'current' view can end up being different from
what we see in pg_class / pg_attribute under the old snapshot.  Andres
provided a good example which illustrates this upthread.
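Roughly, the mismatch looks like this (identifier and table name made
up):

    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '00000004-1';  -- hypothetical identifier

    -- Ordinary catalog reads are MVCC and honor the imported snapshot:
    SELECT attname, atttypid
      FROM pg_attribute
     WHERE attrelid = 't'::regclass AND attnum > 0 AND NOT attisdropped;
    -- ...shows the column list as of the snapshot.

    -- But execution resolves the relation through syscache, i.e. the
    -- *current* definition, so this can disagree with the rows above:
    COPY t TO STDOUT;
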
Thanks,
    Stephen
