Re: dump of 700 GB database - Mailing list pgsql-general

From John R Pierce
Subject Re: dump of 700 GB database
Date
Msg-id 4B7261CF.4070202@hogranch.com
In response to dump of 700 GB database  ("karsten vennemann" <karsten@terragis.net>)
List pgsql-general
karsten vennemann wrote:
> I have to write a 700 GB large database to a dump to clean out a lot
> of dead records on an Ubuntu server with postgres 8.3.8. What is the
> proper procedure to succeed with this - last time the dump stopped at
> 3.8 GB size I guess. Should I combine the -Fc option of pg_dump and
> the split command ?
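For reference, piping pg_dump's custom-format output through split is the usual workaround when a filesystem caps individual file sizes (which would explain a dump stopping near 4 GB). A sketch; the database name, chunk size, and filename prefix are placeholders:

```shell
# Dump in custom format (-Fc) and cut the stream into 1 GB pieces,
# so no single file hits a filesystem size limit
pg_dump -Fc mydb | split -b 1G - mydb.dump.part.

# To restore, reassemble the pieces and feed them to pg_restore
cat mydb.dump.part.* | pg_restore -d newdb
```

The chunk size can be anything safely under the filesystem's limit.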

VACUUM should clean out the dead tuples; after that, running CLUSTER on
any large tables that are bloated will rewrite them compactly without
needing too much temporary space.
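The suggestion above can be sketched with psql (the database, table, and index names are placeholders; note that CLUSTER takes an exclusive lock on the table while it rewrites it):

```shell
# Mark dead tuples reusable and refresh planner statistics
psql -d mydb -c "VACUUM ANALYZE;"

# Rewrite a bloated table in index order, reclaiming its dead space
# (CLUSTER ... USING indexname is the 8.3+ syntax; the table is
# locked ACCESS EXCLUSIVE for the duration)
psql -d mydb -c "CLUSTER big_table USING big_table_pkey;"
```

CLUSTER needs enough free disk for a second copy of the table being rewritten, but unlike a full dump/restore it works one table at a time.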




