david@blue-labs.org (David Ford) wrote in message news:<3B993392.1000809@blue-labs.org>...
> Help if you would please :)
>
> I have a 10million+ row table and I've only got a couple hundred megs
> left. I can't delete any rows, pg runs out of disk space and crashes.
> I can't pg_dump with compression: the output file gets started, has
> the schema and a bit of other info (about 650 bytes), then pg_dump
> runs for 30 minutes and pg runs out of disk space and crashes. My
> pg_dump cmd is:
> "pg_dump -d -f syslog.tar.gz -F c -t syslog -Z 9 syslog".
>
> I want to dump this database (entire pgsql dir is just over two gigs)
> and put it on another larger machine.
>
> I can't afford to lose this information, are there any helpful hints?
Do you have ssh available on your computer? Is an sshd daemon running
on the other computer? If so, try this:
pg_dump mydatabase | ssh othersystem.com dd of=/home/me/database.dump
The output of pg_dump on your computer will end up on the other
computer in /home/me/database.dump.
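Once it's there, restoring is just a matter of feeding the dump back
to psql on the other machine (this assumes a plain-text dump like the
one above, and that you've already created the database there):

psql mydatabase < /home/me/database.dump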
You could even do:
pg_dump mydatabase | gzip -c | ssh othersystem.com 'gunzip -c | psql mydatabase'
This runs the database dump through gzip and pipes it over ssh to the
other machine, where it's piped through gunzip and into psql.
Obviously, you'll need to "createdb mydatabase" on "othersystem.com"
before running the above line.
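You can do that step over ssh too, if you like (assuming your account
on othersystem.com is allowed to create databases):

ssh othersystem.com createdb mydatabase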
I tried this just now, and it works beautifully.
If you're doing it across a LAN, you can dispense with the gzip/gunzip
bit - you'll lose more time to CPU overhead than you'll gain from the
compression (save compression for when bandwidth is really limited).
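So on a LAN the whole thing reduces to this (again assuming the
database already exists on the target machine):

pg_dump mydatabase | ssh othersystem.com psql mydatabase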
Calvin
P.S. This can also be done with rsh (remote shell) and the
corresponding rsh server if you don't have ssh - but you really
_should_ be using ssh.