Re: Problem w/ dumping huge table and no disk space - Mailing list pgsql-general

From: Alvaro Herrera
Subject: Re: Problem w/ dumping huge table and no disk space
Date: Fri, 7 Sep 2001
Msg-id: Pine.LNX.4.33L2.0109071737460.5974-100000@aguila.protecne.cl
In response to: Problem w/ dumping huge table and no disk space (David Ford <david@blue-labs.org>)
List: pgsql-general
On Fri, 7 Sep 2001, David Ford wrote:

> Help if you would please :)
>
> I have a 10 million+ row table and I've only got a couple hundred megs
> left.  I can't delete any rows; pg runs out of disk space and crashes.
> I can't pg_dump with compression either: the output file gets started,
> has the schema and a bit of other info comprising about 650 bytes, runs
> for 30 minutes, and then pg runs out of disk space and crashes.  My
> pg_dump command is: "pg_dump -d -f syslog.tar.gz -F c -t syslog -Z 9 syslog".

Try piping the output through ssh or something similar; you don't have
to keep the dump on the local machine.

From the bigger machine, something like

ssh server-with-data "pg_dump <options>" > syslog-dump

or, from the smaller machine,

pg_dump <options> | ssh big-machine "cat > syslog-dump"

should do the trick. You may even be able to pipe the output directly
into psql or pg_restore on the other end. Make sure pg_dump writes its
output to stdout (i.e., don't pass -f).
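
Untested sketch, reusing the options from your original command; the
hostname big-machine and the target database syslog_copy are just
placeholders, adjust to taste:

# Dump the syslog table as a compressed custom-format archive straight
# to stdout (no -f, so nothing is written to local disk) and stash it
# on the remote machine:
pg_dump -F c -Z 9 -t syslog syslog | ssh big-machine "cat > syslog.dump"

# Or skip the intermediate file entirely: pg_restore reads the archive
# from stdin when no filename is given (syslog_copy must already exist
# on big-machine):
pg_dump -F c -t syslog syslog | ssh big-machine "pg_restore -d syslog_copy"

Either way, nothing larger than a pipe buffer ever touches the full disk.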

HTH.

--
Alvaro Herrera (<alvherre[@]atentus.com>)

