Re: Dump large DB and restore it after all. - Mailing list pgsql-general

From: Condor
Subject: Re: Dump large DB and restore it after all.
Date:
Msg-id: ae75615f39f1f8f78cfefef707ec48ea@stz-bg.com
In response to: Re: Dump large DB and restore it after all.  (Craig Ringer <craig@postnewspapers.com.au>)
Responses: Re: Dump large DB and restore it after all.  (Tomas Vondra <tv@fuzzy.cz>)
           Re: Dump large DB and restore it after all.  (John R Pierce <pierce@hogranch.com>)
List: pgsql-general
On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote:
> On 5/07/2011 5:00 PM, Condor wrote:
>> Hello ppl,
>> can I ask how to dump large DB ?
>
> Same as a smaller database: using pg_dump. Why are you trying to
> split your dumps into 1GB files? What does that gain you?
>
> Are you using some kind of old file system and operating system that
> cannot handle files bigger than 2GB? If so, I'd be pretty worried
> about running a database server on it.

Well, I ran pg_dump on an ext3 filesystem with PostgreSQL 8.x and 9,
and the SQL file was truncated.
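
For reference, a minimal sketch of how such a dump could be split into
1 GB pieces and restored again (the database name "mydb" and the chunk
size are only examples, not taken from this thread):

  # plain-text dump, compressed and split so no single file gets too big
  pg_dump mydb | gzip | split -b 1G - mydb.sql.gz.part-

  # restore by concatenating the pieces back into one stream
  cat mydb.sql.gz.part-* | gunzip | psql mydb

  # alternatively, pg_dump's custom format is compressed already
  # and restores with pg_restore
  pg_dump -Fc -f mydb.dump mydb
  pg_restore -d mydb mydb.dump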

>
> As for gzip: gzip is almost perfectly safe. The only downside with
> gzip is that a corrupted block in the file (due to a hard
> disk/dvd/memory/tape error or whatever) makes the rest of the file,
> after the corrupted block, unreadable. Since you shouldn't be storing
> your backups on anything that might get corrupted blocks, that should
> not be a problem. If you are worried about that, you're better off
> still using gzip and using an ECC coding system like par2 to allow
> recovery from bad blocks. The gzipped dump plus the par2 file will be
> smaller than the uncompressed dump, and give you much better
> protection against errors than an uncompressed dump will.
>
> To learn more about par2, go here:
>
>   http://parchive.sourceforge.net/


Thank you for info.
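
For the archives, a minimal sketch of the gzip + par2 recipe described
above (file names and the ~10% redundancy level are only examples):

  # compress the dump, then create par2 recovery data for it
  gzip mydb.sql
  par2 create -r10 mydb.sql.gz.par2 mydb.sql.gz

  # later: verify the archive and repair it if some blocks went bad
  par2 verify mydb.sql.gz.par2
  par2 repair mydb.sql.gz.par2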

> --
> Craig Ringer
>

--
Regards,
Condor
