Ron Johnson <ron.l.johnson@cox.net> writes:
> And I strongly dispute the notion that it would only take 3 hours
> to dump/restore a TB of data. This seems to point to a downside
> of MVCC: this inability to do "page-level" database backups, which
> allow for "rapid" restores, since all of the index structures are
> part of the backup, and don't have to be created, in serial, as part
> of the pg_restore.
If you have a filesystem capable of atomic "snapshots" (Veritas offers
this, I think), you *should* be able to do this fairly safely--take a
snapshot of the filesystem and back up the snapshot. On a restore of
the snapshot, transactions in progress when the snapshot happened will
be rolled back, but everything that committed before then will be there
(same thing PG does when it recovers from a crash). Of course, if you
have your database cluster split across multiple filesystems, this
might not be doable.
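As a concrete sketch of the snapshot-and-archive idea, here's roughly what it
would look like with Linux LVM (the volume, mount point, and paths are just
assumptions for illustration--adjust for your own setup, and note the whole
cluster, data plus WAL, must be on the one snapshotted volume):

```shell
#!/bin/sh
# Sketch: back up a PostgreSQL data directory via an atomic LVM snapshot.
# Assumes the entire cluster lives on /dev/vg0/pgdata (hypothetical names).

# Take a point-in-time snapshot; atomic at the block level, so the
# database files are captured in a crash-consistent state.
lvcreate --snapshot --size 1G --name pgsnap /dev/vg0/pgdata

# Mount the snapshot read-only and archive it.
mkdir -p /mnt/pgsnap
mount -o ro /dev/vg0/pgsnap /mnt/pgsnap
tar czf /backup/pgdata-$(date +%Y%m%d).tar.gz -C /mnt/pgsnap .

# Clean up.
umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap
```

Restoring that tarball and starting the postmaster on it is then just like
crash recovery: committed transactions survive, in-flight ones roll back.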
Note: I haven't done this myself, but it should work, and I've seen it
discussed before. I think Oracle does something similar at the storage
manager level when you put a database in backup mode; doing the same in
PG would probably be a lot of work.
This doesn't help with the upgrade issue, of course...
-Doug