Re: [ADMIN] large database: problems with pg_dump and pg_restore - Mailing list pgsql-admin

From Martin Povolny
Subject Re: [ADMIN] large database: problems with pg_dump and pg_restore
Date
Msg-id E1PB1rW-0005Bs-De@ns.solnet.cz
In response to Re: large database: problems with pg_dump and pg_restore  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-admin
On 27.10.2010, tgl@sss.pgh.pa.us wrote:
> Martin Povolny <martin.povolny@solnet.cz> writes:
>> I had 5 databases, 4 dumped ok, the 5th, the largest failed dumping: I
>> was unable to
>> make a dump in the default 'tar' format. I got this message:
>> pg_dump: [tar archiver] archive member too large for tar format
>
> This is expected: tar format has a documented limit of 8GB per table.
> (BTW, tar is not the "default" nor the recommended format, in part
> because of that limitation. The custom format is preferred unless
> you really *need* to manipulate the dump files with "tar" for some
> reason.)


Ok, I get it. Don't use the 'tar' format. I will not. 
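
For the record, the custom-format equivalent of what I run would be roughly this (a sketch only -- the same connection options as my table dump below, just with --format custom and a different output file name):

pg_dump --verbose --host localhost --username bb --create --format custom --file archiv5.dump archiv5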

As to hitting the limit of 8 GB per table  -- I have one really large table. 

But if I dump that table separately, I get:

pg_dump --verbose --host localhost --username bb --create --format tar --file archiv5-process.dump --table process archiv5

-rw-r--r-- 1 root root 4879763968 2010-10-27 10:15 archiv5-process.dump

In other words, I am sure I did not hit the 8 GB per table limit, but I am over 4 GB per table.

The 'process' table is the largest and is also the one where restore fails in both cases (tar format and custom format).
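
For what it's worth, this is roughly how I would double-check the size of that table (just a psql one-liner; note that the 8 GB tar limit applies to the dumped data, not the on-disk size, so this is only a rough indication):

$ psql --host localhost --username bb -d archiv5 -c "SELECT pg_size_pretty(pg_total_relation_size('process'));"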

>
>> for the bb.dump in the 'custom' format:
>> pg_restore: [vlastní archivář] unexpected end of file
>
> Hm, that's weird. I can't think of any explanation other than the dump
> file somehow getting corrupted. Do you get sane-looking output if you
> run "pg_restore -l bb.dump"?

Sure, I did pg_restore -l into a file and did not get any errors.

Then I commented out the entries for the tables that had already been restored and tried restoring only the tables that come after the table 'process'.

But I got the same error message :-(

like this:

$ /usr/lib/postgresql/8.4/bin/pg_restore -l bb.dump > bb.list

# then edit bb.list, commenting out lines before and including table 'process', saving into bb.list-post-process

$ /usr/lib/postgresql/8.4/bin/pg_restore --verbose --use-list bb.list-post-process bb.dump > bb-list-restore.sql
pg_restore: restoring data for table "process_internet"
pg_restore: [custom archiver] unexpected end of file
pg_restore: *** aborted because of error
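
One more thing I want to rule out is that bb.dump got truncated somewhere along the way (the dump is well over 4 GB, so a tool or filesystem with a 4 GB file limit could have cut it short). A quick sanity check would be something like this (comparing the size and checksum against the file as it was right after pg_dump finished):

$ ls -l bb.dump
$ md5sum bb.dump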

As to splitting the dump as suggested earlier in this thread -- I am sure my system can handle files over 4 GB, and I don't understand how splitting the output of pg_dump would prevent pg_dump itself from failing. But I can try that too.
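
If I do try it, I understand the split approach roughly like this (a sketch only -- the chunk size, part file names, and target database name are made up, and as far as I can tell this only helps where a single file is capped at 2 or 4 GB; it does not change what pg_dump itself does):

$ pg_dump --host localhost --username bb --format plain archiv5 | split -b 1000m - archiv5.sql.part_
$ cat archiv5.sql.part_* | psql --host localhost --username bb archiv5_restored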

Also I did not try the '-F plain' dump format. 

I stopped using the plain format a while back because I was getting output as if I had used --inserts although I had not, and I did not see any pg_dump option that would force the use of COPY for dumping data. But that was several versions of PostgreSQL ago and I have not tried it since.
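
If I do retry the plain format, I would just check the output directly -- as far as I know COPY is the default there and INSERT statements only appear when --inserts or --column-inserts is given explicitly. Something like this (a sketch; it just counts the COPY statements in the dump):

$ pg_dump --host localhost --username bb --format plain --table process archiv5 | grep -c '^COPY '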

Many thanks for your time and tips!

--
Mgr. Martin Povolný, soLNet, s.r.o.,
+420777714458, martin.povolny@solnet.cz
