Re: Re: Backups WAS: 2 gig file size limit - Mailing list pgsql-general

From Joseph Shraibman
Subject Re: Re: Backups WAS: 2 gig file size limit
Date
Msg-id 3B4A538F.8C901C72@selectacast.net
In response to Re: [HACKERS] 2 gig file size limit  (Lamar Owen <lamar.owen@wgcr.org>)
Responses Re: Re: Backups WAS: 2 gig file size limit  (Mike Castle <dalgoda@ix.netcom.com>)
Re: Re: Backups WAS: 2 gig file size limit  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
Doug McNaught wrote:
>
> [HACKERS removed from CC: list]
>
> Joseph Shraibman <jks@selectacast.net> writes:
>
> > Doing a dumpall for a backup is taking a long time, and a restore from
> > the dump files doesn't leave the database in its original state.  Could
> > a command be added that locks all the files, quickly tars them up, then
> > releases the lock?
>
> As I understand it, pg_dump runs inside a transaction, so the output
> reflects a consistent snapshot of the database as of the time the dump
> starts (thanks to MVCC); restoring will put the database back to where
> it was at the start of the dump.
>
In theory.
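
The theory, for reference: pg_dump does all of its reads inside a
single transaction, so every table is seen through the same MVCC
snapshot.  A simplified sketch of the equivalent psql session (not
pg_dump's actual code):

    $ psql mydb <<'EOF'
    BEGIN;      -- one transaction = one MVCC snapshot
    -- read the schema, then COPY each table's data out;
    -- concurrent writers don't change what this transaction sees
    COMMIT;
    EOF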

> Have you observed otherwise?

Yes.  Specifically, timestamps are dumped in a way that (1) loses
precision and (2) sometimes puts 60 in the seconds field, which
prevents the dump from being restored.
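
For example (illustrative only; the exact rounding depends on the
server version and DateStyle setting, and the input value here is
made up):

    $ psql -c "SELECT '2001-07-09 23:59:59.997'::timestamp"
           ?column?
    ------------------------
     2001-07-09 23:59:60.00
    (1 row)

The fractional digits have been rounded away, and a seconds field of
60 is rejected by the timestamp input routines when the dump is
reloaded.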

And I suspect any statistics generated by VACUUM ANALYZE are lost.
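
If so, they have to be regenerated by hand after a reload, e.g.:

    $ psql mydb -c "VACUUM ANALYZE"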

If a database got corrupted somehow, restoring from the dump would
mean deleting the original database first and then reloading it from
the dump files.  Untarring would be much easier (especially as the
database grows).  Obviously this wouldn't replace dumps, but for quick
backups it would be great.
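
A rough sketch of what I mean (hypothetical: no such lock command
exists today, so a clean shutdown stands in for the lock):

    $ pg_ctl -D $PGDATA stop      # quiesce: no writers, files consistent
    $ tar cf /backups/pgdata.tar $PGDATA
    $ pg_ctl -D $PGDATA start

Restoring would just be stop, untar, start: the on-disk files come
back exactly as they were, planner statistics and all.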

--
Joseph Shraibman
jks@selectacast.net
Increase signal to noise ratio.  http://www.targabot.com
