Problem w/ dumping huge table and no disk space - Mailing list pgsql-general

Help if you would please :)

I have a 10-million-plus-row table and only a couple hundred megs of disk
left.  I can't delete any rows: pg runs out of disk space and crashes.
I can't pg_dump with compression either: the output file gets started,
holds the schema and a bit of other info (about 650 bytes), then after
30 minutes pg runs out of disk space and crashes.  My pg_dump command is:
"pg_dump -d -f syslog.tar.gz -F c -t syslog -Z 9 syslog".

I want to dump this database (the entire pgsql directory is just over two
gigs) and move it to another, larger machine.
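One hedged sketch of the above, assuming the larger machine is reachable over SSH (hostname, user, and target path below are placeholders, not anything from the original post): pg_dump can write to stdout, so the dump can be streamed across the network without ever touching the full local disk.

```shell
# Sketch only: stream a compressed custom-format dump of the syslog table
# straight to a remote host, so no output file is created locally.
# "user@bighost" and "/backup/syslog.dump" are hypothetical placeholders.
pg_dump -F c -Z 9 -t syslog syslog | ssh user@bighost 'cat > /backup/syslog.dump'
```

The key point is omitting `-f` so pg_dump writes to stdout; whether this avoids the crash depends on why the server itself is running out of space during the dump.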

I can't afford to lose this information. Are there any helpful hints?

I'll be happy to provide more information if desired.

David
