Thread: Re: [GENERAL] Database synchronisation over the internet...

Re: [GENERAL] Database synchronisation over the internet...

From: Wim Aarts
>> I am also working on the very same thing. I am using ssh to authenticate
>> and encrypt the connection. Since the data is not too large, the master
>> database uses pg_dump to create a file, which is securely copied via scp
>> to the slave host. An ssh connection then logs into the slave, drops the
>> table, and loads the data from the dump file that was copied over.
>>
>> Seems to work so far, but there must be more refinements I can add.
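[For reference, the dump-then-copy workflow above might look like the following minimal sketch. The host, database, table, and path names are placeholders, not values from the original post.]

```shell
#!/bin/sh
# Minimal sketch of the dump -> scp -> reload workflow described above.
# MASTER_DB, SLAVE_HOST, TABLE, and DUMP are placeholder values.
sync_via_dump() {
    MASTER_DB=mydb
    SLAVE_HOST=slave.example.com
    TABLE=mytable
    DUMP=/tmp/$TABLE.sql

    # Dump only the one table from the master database; the dump
    # includes the CREATE TABLE statement, so reloading after a DROP
    # recreates the table.
    pg_dump -t "$TABLE" "$MASTER_DB" > "$DUMP"

    # Copy the dump to the slave over ssh.
    scp "$DUMP" "$SLAVE_HOST:$DUMP"

    # On the slave, drop the old table and reload it from the dump.
    ssh "$SLAVE_HOST" \
        "psql -c 'DROP TABLE $TABLE;' $MASTER_DB && psql $MASTER_DB < $DUMP"
}
```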

>
>use ssh to create a tunnel between server A and server B, then, from
>Server A, issue something like:
>
>psql -h ServerB -p 5432 -c "drop table <tablename>";
>pg_dump tablename | psql -h ServerB -p 5432 ...
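[The tunnel variant could be sketched like this; again hypothetical — the local port, host names, and table name are placeholders.]

```shell
#!/bin/sh
# Sketch of the ssh-tunnel variant: forward a local port to the slave's
# PostgreSQL port, then drop and reload the table through the tunnel.
# SLAVE_HOST, TABLE, and DB are placeholder values.
sync_via_tunnel() {
    SLAVE_HOST=serverB.example.com
    TABLE=mytable
    DB=mydb

    # Forward local port 5433 to port 5432 on the slave; -f backgrounds
    # ssh after authentication and -N skips running a remote command.
    ssh -f -N -L 5433:localhost:5432 "$SLAVE_HOST"

    # Through the tunnel, drop the table on the slave, then reload it
    # by piping the master's dump straight into psql.
    psql -h localhost -p 5433 -c "DROP TABLE $TABLE;" "$DB"
    pg_dump -t "$TABLE" "$DB" | psql -h localhost -p 5433 "$DB"
}
```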

So encryption will be with ssh. Thanks.

Then, considering both options, my question would be: what happens if the
internet connection fails for some period of time? Will the second command
keep visitors waiting?
Are there any different kinds of solution, where only the changed data is synced?
Is such a solution likely to be slower than using dumps?

Cheers Wim.


Re: [GENERAL] Database synchronisation over the internet...

From: The Hermit Hacker
On Fri, 14 Jan 2000, Wim Aarts wrote:

>
> >> I am also working on the very same thing. I am using ssh to authenticate
> >> and encrypt the connection. Since the data is not too large, the master
> >> database uses pg_dump to create a file, which is securely copied via scp
> >> to the slave host. An ssh connection then logs into the slave, drops the
> >> table, and loads the data from the dump file that was copied over.
> >>
> >> Seems to work so far, but there must be more refinements I can add.
>
> >
> >use ssh to create a tunnel between server A and server B, then, from
> >Server A, issue something like:
> >
> >psql -h ServerB -p 5432 -c "drop table <tablename>";
> >pg_dump tablename | psql -h ServerB -p 5432 ...
>
> So encryption will be with ssh. Thanks.
>
> Then, considering both options, my question would be: what happens if the
> internet connection fails for some period of time? Will the second command
> keep visitors waiting?

That's the big risk on this one...you need to be very confident in your
connections, unfortunately ...

> Are there any different kind of solutions? Where only changed data is
> synced?  Is such a solution likely to be slower than using dumps?

Currently, none that I'm aware of ... if you are into programming, you could
probably write a quick-n-dirty program that connects to both databases at
the same time and, assuming you have some sort of unique key on the tables
in question, compares each tuple on the master server with what the slave
server has at that time, updating the slave with the master's version
wherever they differ ...
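[One crude way to approximate that comparison without writing a full client program, sketched here as an illustration: dump both tables as sorted plain-text rows and diff them, so only the differing rows need to be pushed to the slave. Host, table, and database names are placeholders, and this assumes each row's unique key is part of its dumped text.]

```shell
#!/bin/sh
# Sketch of a "sync only what changed" approach: dump each table as
# plain rows, sort them, and use comm to find rows that exist on the
# master but not (or not identically) on the slave.

# Print the lines present in the first file but not in the second.
# comm requires both inputs to be sorted; -2 and -3 suppress the
# second file's unique lines and the common lines.
rows_only_in_first() {
    comm -23 "$1" "$2"
}

# Fetch one table's rows from a host as sorted, unaligned text.
# -A turns off alignment, -t prints tuples only, one row per line.
fetch_sorted_rows() {
    psql -h "$1" -At -c "SELECT * FROM $2" "$3" | sort
}

# Usage sketch (commented out; needs live servers):
# fetch_sorted_rows master.example.com mytable mydb > /tmp/master.rows
# fetch_sorted_rows slave.example.com  mytable mydb > /tmp/slave.rows
# rows_only_in_first /tmp/master.rows /tmp/slave.rows > /tmp/changed.rows
```

The changed rows would then still have to be applied to the slave, e.g. as UPDATEs keyed on the unique column.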

There have been several people talking about replication issues ... the best
place to look is the archives; search for "replication" ...


Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org