Thread: dump of 700 GB database

dump of 700 GB database

From: karsten vennemann

I need to dump a 700 GB database on an Ubuntu server running postgres 8.3.8 so that I can clean out a lot of dead records. What is the proper procedure to succeed with this? Last time, the dump stopped at about 3.8 GB. Should I combine the -Fc option of pg_dump with the split command?
I thought something like
"pg_dump -Fc test | split -b 1000m - testdb.dump"
might work?
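Concretely, reassembling the pieces into a regular file before restoring (a minimal sketch; the target database name testdb_clean is just a placeholder):

    # split writes 1000 MB chunks named testdb.dumpaa, testdb.dumpab, ...
    pg_dump -Fc test | split -b 1000m - testdb.dump
    # reassemble into a single archive and restore into a fresh,
    # already-created database; pg_restore can also read from stdin,
    # but a regular file is the safe route
    cat testdb.dump* > testdb_clean.dump
    pg_restore -d testdb_clean testdb_clean.dump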
Karsten
 
Terra GIS LTD
Seattle, WA, USA 
 

Re: dump of 700 GB database

From: John R Pierce
karsten vennemann wrote:
> I need to dump a 700 GB database on an Ubuntu server running postgres
> 8.3.8 so that I can clean out a lot of dead records. What is the proper
> procedure to succeed with this? Last time, the dump stopped at about
> 3.8 GB. Should I combine the -Fc option of pg_dump with the split
> command?

A plain VACUUM should clean out the dead tuples; after that, a CLUSTER on any
large tables that are bloated will sort them out without needing too much
temporary space.
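Something along these lines, from the shell (the table and index names are placeholders, and note that 8.3 still uses the "CLUSTER index ON table" spelling):

    # plain VACUUM marks dead tuples reusable without an exclusive lock
    psql -d test -c "VACUUM ANALYZE"
    # CLUSTER rewrites the table in index order, so the bloat is actually
    # returned; it locks the table and needs free disk for the new copy
    psql -d test -c "CLUSTER bigtable_pkey ON bigtable"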