Thread: Dump large DB and restore it after all.

Dump large DB and restore it after all.

From
Condor
Date:
Hello ppl,
can I ask how to dump a large DB? I read the documentation, but I had a
problem with split about a year ago and have not used it since. The problem
was that when I ran: pg_dump dbname | split -b 1G - filename, I was unable
to restore the dump correctly. When I started restoring the DB, I got an
error from SQL; it did not like one line. I investigated, and the problem
was that the last line of the first file ended with a value field like
'"This is a ' and there the file stopped. (I added the single quotes in the
example so it is clear where the phrase begins and ends.) The next file
started with the rest of the sentence, 'simple test"' (again, without the
single quotes), and that row was not inserted into the DB. For now I use
gzip, but I don't know how safe that is, because when I check the DB size,
the uncompressed dump is 5 GB while pg_database_size shows me 10 GB.
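
To make this concrete, what I run now is roughly the following (the database
and file names here are just placeholders):

    # Dump and compress in one pass.
    pg_dump dbname | gzip > dbname.sql.gz

    # At least check that the compressed file is readable.
    gzip -t dbname.sql.gz

    # Restore would then be:
    gunzip -c dbname.sql.gz | psql dbname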



--
Regards,
Condor

Re: Dump large DB and restore it after all.

From
Christian Ullrich
Date:
* Condor wrote:

> The problem was that when I ran: pg_dump dbname | split -b 1G - filename,
> I was unable to restore the dump correctly. When I started restoring the
> DB, I got an error from SQL; it did not like one line. I investigated, and
> the problem was that the last line of the first file ended with a value
> field like '"This is a ' and there the file stopped. (I added the single
> quotes in the example so it is clear where the phrase begins and ends.)
> The next file started with the rest of the sentence, 'simple test"'
> (again, without the single quotes), and that row was not inserted into the DB.

psql does not know that you split the file. You must send the whole dump
to psql at once, not piece by piece.

Instead of

    psql dbname < filename.1
    psql dbname < filename.2

do this:

    cat filename.1 filename.2 | psql dbname
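
By the way, with the exact split command from your mail, GNU split names the
pieces filenameaa, filenameab, and so on by default, so a glob that sorts in
the same order works as well (a sketch, assuming the default suffixes):

    # split's default two-letter suffixes sort in the order the
    # pieces were written, so the glob feeds them to psql in sequence.
    cat filename?? | psql dbname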

--
Christian


Re: Dump large DB and restore it after all.

From
Craig Ringer
Date:
On 5/07/2011 5:00 PM, Condor wrote:
> Hello ppl,
> can I ask how to dump a large DB?

Same as a smaller database: using pg_dump. Why are you trying to split
your dumps into 1GB files? What does that gain you?
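
If a single huge plain-text file is what bothers you, pg_dump's custom
format is also worth a look: it is compressed by default and is restored
with pg_restore instead of psql. A minimal sketch (database and file names
are placeholders):

    # Custom-format dump: compressed, and restorable selectively.
    pg_dump -Fc dbname > dbname.dump

    # Restore into an existing, empty database.
    pg_restore -d dbname dbname.dump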

Are you using some kind of old file system and operating system that
cannot handle files bigger than 2GB? If so, I'd be pretty worried about
running a database server on it.

As for gzip: gzip is almost perfectly safe. The only downside with gzip
is that a corrupted block in the file (due to a hard
disk/dvd/memory/tape error or whatever) makes the rest of the file,
after the corrupted block, unreadable. Since you shouldn't be storing
your backups on anything that might get corrupted blocks, that should
not be a problem. If you are worried about that, you're better off still
using gzip and using an ECC coding system like par2 to allow recovery
from bad blocks. The gzipped dump plus the par2 file will be smaller than
the uncompressed dump, and give you much better protection against
errors than an uncompressed dump will.

To learn more about par2, go here:

   http://parchive.sourceforge.net/
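
A rough sketch of how the two can be combined, using the par2 command-line
tool (the redundancy level and file names are only examples):

    # Compressed dump plus roughly 10% recovery data.
    pg_dump dbname | gzip > dbname.sql.gz
    par2 create -r10 dbname.sql.gz

    # Later, to check the archive and repair damaged blocks:
    par2 verify dbname.sql.gz.par2
    par2 repair dbname.sql.gz.par2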

--
Craig Ringer

POST Newspapers
276 Onslow Rd, Shenton Park
Ph: 08 9381 3088     Fax: 08 9388 2258
ABN: 50 008 917 717
http://www.postnewspapers.com.au/

Re: Dump large DB and restore it after all.

From
Condor
Date:
On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote:
> On 5/07/2011 5:00 PM, Condor wrote:
>> Hello ppl,
>> can I ask how to dump a large DB?
>
> Same as a smaller database: using pg_dump . Why are you trying to
> split your dumps into 1GB files? What does that gain you?
>
> Are you using some kind of old file system and operating system that
> cannot handle files bigger than 2GB? If so, I'd be pretty worried
> about running a database server on it.

Well, I ran pg_dump on an ext3 filesystem with PostgreSQL 8.x and 9, and the
SQL file was truncated.

>
> As for gzip: gzip is almost perfectly safe. The only downside with
> gzip is that a corrupted block in the file (due to a hard
> disk/dvd/memory/tape error or whatever) makes the rest of the file,
> after the corrupted block, unreadable. Since you shouldn't be storing
> your backups on anything that might get corrupted blocks, that should
> not be a problem. If you are worried about that, you're better off
> still using gzip and using an ECC coding system like par2 to allow
> recovery from bad blocks. The gzipped dump plus the par2 file will be
> smaller than the uncompressed dump, and give you much better
> protection against errors than an uncompressed dump will.
>
> To learn more about par2, go here:
>
>   http://parchive.sourceforge.net/


Thank you for info.

> --
> Craig Ringer
>

--
Regards,
Condor

Re: Dump large DB and restore it after all.

From
Tomas Vondra
Date:
On 5.7.2011 13:31, Condor wrote:
> On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote:
>> On 5/07/2011 5:00 PM, Condor wrote:
>>> Hello ppl,
>>> can I ask how to dump a large DB?
>>
>> Same as a smaller database: using pg_dump . Why are you trying to
>> split your dumps into 1GB files? What does that gain you?
>>
>> Are you using some kind of old file system and operating system that
>> cannot handle files bigger than 2GB? If so, I'd be pretty worried
>> about running a database server on it.
>
> Well, I ran pg_dump on an ext3 filesystem with PostgreSQL 8.x and 9, and
> the SQL file was truncated.

AFAIK the ext3 filesystem should handle at least 16GB files, depending
on the block size. And I don't think pg_dump truncates the file.
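
One quick sanity check (assuming a plain-text dump named dump.sql) is to
look at the end of the file; a complete plain-format dump finishes with a
"PostgreSQL database dump complete" comment:

    # The last lines of a complete plain-format dump include
    # "-- PostgreSQL database dump complete".
    tail -n 5 dump.sql

    # And compare the on-disk size with what you expect.
    ls -lh dump.sql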

Tomas

Re: Dump large DB and restore it after all.

From
John R Pierce
Date:
On 07/05/11 4:31 AM, Condor wrote:
>>
>> Are you using some kind of old file system and operating system that
>> cannot handle files bigger than 2GB? If so, I'd be pretty worried
>> about running a database server on it.
>
> Well, I ran pg_dump on an ext3 filesystem with PostgreSQL 8.x and 9, and
> the SQL file was truncated.

Did you write this dump to an NFS or SMB/CIFS mounted network file system,
or something like that?




--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast