On 5/10/2010 2:46 PM, Kynn Jones wrote:
> On Mon, May 10, 2010 at 1:33 PM, Greg Sabino Mullane
> <greg@turnstep.com> wrote:
>
> > I would like to replicate the following Unix pipe within a Perl
> > script, perhaps using DBD::Pg:
> >
> > % pg_dump -Z9 -Fc -U <DB_USER> <FROM_DB> |
> >     pg_restore -v -d <TO_DB> -p <SSH_TUNNEL_PORT> -h localhost -U <DB_USER>
> >
> > Of course, I can try to use Perl's system, and the like, to run this
> > pipe verbatim, but I see this as a last-resort approach.
> >
> > Is there a more direct way?
>
> ...
>
> If you simply want to avoid the pipes, you can think about calling
> pg_dump from the remote box, using an authorized_keys entry with a
> specific command in it, and other tricks...
>
> I can work with pg_dump, I think. What I'm trying to avoid is the
> SSH-tunneling, which I find too fragile for reliable automated operation.
>
> My script can use DBI::connect to provide a password when connecting to
> the remote host, so I can run regular SQL on the remote host via Perl
> DBI without SSH-tunneling.
>
> But I have not found a way for my script to provide a password when it
> runs commands like dropdb, createdb, and pg_restore with the "-h <REMOTE
> HOST>" flag. So I end up resorting to SSH-tunneling. This is what I'm
> trying to avoid.
>
> Your idea of having the remote host run the pg_dump is worth looking
> into, although I'm reluctant because involving the remote host like this
> would significantly complicate my whole setup.
>
> Anyway, thanks!
>
> ~K
>
>
Ah, this one I have hit too. I have very large database updates to send
to the web boxes... and I'd sometimes lose the connection midway.
I changed the process to dump to a file, then rsync the file to the
destination, then run the restore remotely via ssh.
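
In Perl, the whole sequence can be sketched roughly like this (the host
name, dump path, and database name below are just placeholders, and the
error handling is reduced to die-on-failure):

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Placeholders -- adjust for your environment.
  my $db        = 'mydb';
  my $remote    = 'web1.example.com';
  my $dump_file = '/tmp/mydb.dump';

  # 1. Dump locally to a compressed custom-format archive.
  system('pg_dump', '-Z9', '-Fc', '-f', $dump_file, $db) == 0
      or die "pg_dump failed: $?";

  # 2. Copy the archive over; rsync can resume a partial transfer,
  #    unlike a long-lived pipe that dies with the connection.
  system('rsync', '-az', '--partial', $dump_file, "$remote:$dump_file") == 0
      or die "rsync failed: $?";

  # 3. Run the restore on the remote box over ssh.
  system('ssh', $remote, 'pg_restore', '-v', '-d', $db, $dump_file) == 0
      or die "remote pg_restore failed: $?";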
-Andy
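
P.S. On the password question: the libpq command-line tools (pg_dump,
pg_restore, createdb, dropdb) read the PGPASSWORD environment variable
and the ~/.pgpass file, so a Perl script can authenticate against the
remote host directly, without an SSH tunnel. A minimal sketch, with
hypothetical host, credentials, and dump path:

  use strict;
  use warnings;

  # Hypothetical connection details.
  my ($host, $port, $user, $db) = ('db.example.com', 5432, 'dbuser', 'mydb');

  {
      # libpq reads PGPASSWORD, so the child processes below won't prompt.
      # (A ~/.pgpass file with mode 0600 is the tidier long-term option.)
      local $ENV{PGPASSWORD} = 'secret';

      # Ignore dropdb failure in case the database doesn't exist yet.
      system('dropdb', '-h', $host, '-p', $port, '-U', $user, $db);

      system('createdb', '-h', $host, '-p', $port, '-U', $user, $db) == 0
          or die "createdb failed: $?";

      system('pg_restore', '-v', '-h', $host, '-p', $port, '-U', $user,
             '-d', $db, '/tmp/mydb.dump') == 0
          or die "pg_restore failed: $?";
  }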