Thread: Re: [HACKERS] database replication

Re: [HACKERS] database replication

From: "Damond Walker"

>
>     I too have been thinking about this some over the last year or
>two, just trying to find a quick and easy way to do it. I am not so much
>interested in replication as in synchronization, say between a desktop
>machine and a laptop, so I can keep the databases on each in sync with
>each other. For this sort of purpose, both the local and remote databases
>would be "idle" at the time of syncing.
>
   I don't think it would matter if the databases are idle or not to be
honest with you.  At any single point in time when you replicate I'd figure
that the database would be in a consistent state.  So, you should be able to
replicate (or sync) a remote database that is in use.  After all, you're
getting a snapshot of the database as it stands at 8:45 PM.  At 8:46 PM it
may be totally different...but the next time syncing takes place those
changes would appear in your local copy.
   The one problem you may run into is if the remote host is running a
large batch process.  It's very likely that you will get only 50% of its
changes when you replicate...but then again, that's why you can schedule the
event to work around such things.

>     How about a single, separate table with the fields 'database',
>'tablename', 'oid', 'last_changed', that would store the same data as your
>PGR_TIME field. It would be separated from the actual data tables, and
>therefore would be totally transparent to any database interface
>applications. The 'oid' field would hold each row's OID, a nice, unique
>identification number for the row, while the other fields would tell which
>table and database the oid is in. Then this table can be compared with the
>same table on a remote machine to quickly find updates and changes, and
>each difference can be dealt with in turn.
>
   The problem with OIDs is that they are unique at the local level, but if
you try to use them between servers you can run into overlap.  Also, if a
database is under heavy use this table could quickly become VERY large.  Add
indexes to this table to help performance and you're taking up even more
disk space.
   Using the PGR_TIME field with an index will allow us to find rows which
have changed VERY quickly.  All we need to do now is somehow programmatically
find the primary key for a table so the person setting up replication (or
syncing) doesn't have to have an in-depth knowledge of the schema in order to
set up a syncing schedule.
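   Something along these lines might do the trick (a rough, untested sketch;
it assumes the Python extension for PostgreSQL, PyGreSQL's pg module, the
table name is made up, and the catalog query may need tweaking between
PostgreSQL versions):

import pg   # PyGreSQL, the Python extension for PostgreSQL (needs libpq)

def primary_key_columns(db, table):
    """Return the column names of `table`'s primary key index."""
    # Ask the system catalogs which index on the table is the primary
    # key, then map its attribute numbers back to column names.
    q = """SELECT a.attname
             FROM pg_class c, pg_index i, pg_attribute a
            WHERE c.relname = '%s'
              AND i.indrelid = c.oid AND i.indisprimary
              AND a.attrelid = c.oid AND a.attnum = ANY (i.indkey)
            ORDER BY a.attnum""" % table
    return [row[0] for row in db.query(q).getresult()]

db = pg.connect(dbname='mydb')               # connection details made up
print(primary_key_columns(db, 'customers'))  # hypothetical table

   With that, the tool can build its own WHERE clauses and nobody has to
type in key columns by hand.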

>
>     I like this idea, better than any I have come up with yet. Though,
>how are you going to handle DELETEs?
>
   Oops...how about defining a trigger for this?  With deletion I guess we
would have to move a flag into another table saying we deleted record 'X'
with this primary key from this table.
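   Very roughly, and completely untested, the setup step for one table might
look like this (it assumes plpgsql is installed and that the table's key
column is simply called 'id'; all names are made up):

import pg

db = pg.connect(dbname='mydb')   # connection details made up

# A "tombstone" table recording what was deleted, where, and when.
db.query("""CREATE TABLE pgr_deleted (
                tablename  name,
                pkvalue    text,
                pgr_time   timestamp DEFAULT now())""")

# Trigger function that logs the deleted row's key before it goes away.
# (Older releases spell the return type 'opaque', newer ones 'trigger'.)
db.query("""CREATE FUNCTION pgr_log_delete() RETURNS opaque AS '
            BEGIN
                INSERT INTO pgr_deleted
                    VALUES (TG_RELNAME, OLD.id::text, now());
                RETURN OLD;
            END;' LANGUAGE 'plpgsql'""")

db.query("""CREATE TRIGGER customers_pgr_delete BEFORE DELETE ON customers
            FOR EACH ROW EXECUTE PROCEDURE pgr_log_delete()""")

   The sync tool would then only have to scan pgr_deleted instead of hunting
for rows that are no longer there.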

>
>     Yea, this is indeed the sticky part, and would indeed require some
>fine-tuning. Basically, the way I see it is: if the two timestamps for a
>single row do not match (or if the row, and therefore its timestamp, is
>missing on one side or the other altogether):
>     local ts > remote ts => Local row is exported to remote.
>     remote ts > local ts => Remote row is exported to local.
>     local ts > last sync time && no remote ts =>
>          Local row is inserted on remote.
>     local ts < last sync time && no remote ts =>
>          Local row is deleted.
>     remote ts > last sync time && no local ts =>
>          Remote row is inserted on local.
>     remote ts < last sync time && no local ts =>
>          Remote row is deleted.
>where the synchronization process is running on the local machine. By
>exported, I mean the local values are sent to the remote machine, and the
>row on that remote machine is updated to the local values. How does this
>sound?
>
   The replication part will be the most complex...that much is for
certain...
   I've been writing systems in Lotus Notes/Domino for the last year or so
and I've grown quite spoiled by what it can do in regards to replication.
It's not real-time, so you have to gear your applications to this type of
thing (it's possible to create documents, fire off email to notify people of
changes, and have the email arrive before the replicated documents do).
Replicating large Notes/Domino databases takes quite a while...I don't see
any kind of replication or syncing running in the blink of an eye.
   Having said that, a good algo will have to be written to cut down on
network traffic and to keep database conversations down to a minimum.  This
will be appreciated by people with low-bandwidth connections, I'm sure
(dial-ups, fractional T1s, etc.).
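   Just to make the decision table quoted above concrete, a rough, untested
Python sketch might look like the following (it assumes we have already
pulled a {primary key: PGR_TIME} dictionary from each side plus the time of
the last successful sync; all names are made up):

def decide(key, local, remote, last_sync):
    """Return the sync action for one primary-key value."""
    lts = local.get(key)     # local PGR_TIME, or None if the row is missing
    rts = remote.get(key)    # remote PGR_TIME, or None if the row is missing
    if lts is not None and rts is not None:
        if lts > rts:
            return 'export local row to remote'
        if rts > lts:
            return 'export remote row to local'
        return 'in sync'
    if lts is not None:                # row exists locally only
        if lts > last_sync:
            return 'insert local row on remote'
        return 'delete local row'      # deleted remotely since last sync
    if rts is not None:                # row exists remotely only
        if rts > last_sync:
            return 'insert remote row on local'
        return 'delete remote row'
    return 'row gone on both sides'

   Driving that over the union of the two key sets gives the work list for
one table, and only the rows that actually need moving ever cross the wire.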

>     Or run manually for my purposes. Also, maybe follow it
>with a vacuum run on both sides for all databases, as this is going to
>potentially cause lots of table changes that could stand a cleanup.
>
   What would a vacuum do to a system being used by many people?

>     No, not at all. Though it probably should be renamed from
>replication to synchronization. The former is usually associated with a
>continuous stream of updates between the local and remote databases, so
>they are almost always in sync, and with a queuing ability in case their
>connection is lost for a span of time as well. Very complex and difficult to
>implement, and it would require hacking server code. :( Something only Sybase
>and Oracle have (as far as I know), and from what I have seen of Sybase's
>replication server support (dated by 5 years) it was a pain to set up and get
>running correctly.
   It could probably be named either way...but the one thing I really don't
want to do is start hacking server code.  The PostgreSQL people have enough
to do without worrying about trying to meld anything I've done into their
server.   :)
   Besides, I like the idea of having it operate as a stand-alone product.
The only PostgreSQL features we would require would be triggers and
plpgsql...what was the earliest version of PostgreSQL that supported
plpgsql?  Even then, I don't see the triggers being all that complex.

>     I also like the idea of using Python. I have been using it
>recently for some database interfaces (to PostgreSQL of course :), and it
>is a very nice language to work with. Some worries about performance of
>the program though, as Python is only an interpreted language, and I have
>yet to be really impressed with the speed of execution of my database
>interfaces.
   The only thing we'd need for Python is the Python extensions for
PostgreSQL...which in turn require libpq and that's about it.  So, it
should be able to run on any platform supported by Python and libpq.  Using
Tk for the interface components will require NT people to get additional
software from the 'net.  At least it did with older versions of Windows
Python.  Unix folks should be happy....assuming they have X running on the
machine doing the replication or syncing.  Even so, I wrote a curses-based
Python interface a while back which allows buttons, progress bars, input
fields, etc. (I called it tinter and it's available at
http://iximd.com/~dwalker).  It's a simple interface and could probably be
cleaned up a bit, but it works.  :)

>     Anyway, it sounds like a good project, and finally one where I
>actually have a clue about what is going on, and the skills to help. So, if
>you are interested in pursuing this project, I would be more than glad to
>help. TTYL.
>

   That would be a Good Thing.  Have webspace somewhere?  If I can get
permission from the "powers that be" at the office I could host a website on
our (Domino) webserver.
               Damond



Re: [HACKERS] database replication

From: Ryan Kirkpatrick

On Sun, 26 Dec 1999, Damond Walker wrote:

> >     How about a single, separate table with the fields 'database',
> >'tablename', 'oid', 'last_changed', that would store the same data as your
> >PGR_TIME field. It would be separated from the actual data tables, and
...
>     The problem with OIDs is that they are unique at the local level, but if
> you try to use them between servers you can run into overlap.
Yea, forgot about that point, but it became dead obvious once you
mentioned it. Boy, I feel stupid now. :)

>     Using the PGR_TIME field with an index will allow us to find rows which
> have changed VERY quickly.  All we need to do now is somehow programmatically
> find the primary key for a table so the person setting up replication (or
> syncing) doesn't have to have an in-depth knowledge of the schema in order to
> set up a syncing schedule.
Hmm... Yea, maybe look to see which field(s) has a primary, unique
index on it? Then use those field(s) as a primary key. Just require that
any table to be synchronized have some set of fields that uniquely
identify each row. Either that, or add another field to each table with
our own cross-system-consistent identification scheme. I don't know which
would be more efficient and easier to work with.
The former could potentially get sticky if it takes a lot of
fields to generate a unique key value, but it has the smallest effect on the
table to be synced. The latter could be difficult to keep straight between
systems (local vs. remote), and would require a trigger on inserts to
generate a new, unique id number that does not exist locally or
remotely (nasty issue there), but it would remove the uniqueness
requirement.

>     Oops...how about defining a trigger for this?  With deletion I guess we
> would have to move a flag into another table saying we deleted record 'X'
> with this primary key from this table.
Or, according to my logic below, if a row is missing on one side
or the other, then just compare the remaining row's timestamp to the last
synchronization time (stored in a separate table/db elsewhere). The
results of the comparison and the state of row existence tell one if the
row was inserted or deleted since the last sync, and what should be done
to perform the sync.

> >     Yea, this is indeed the sticky part, and would indeed require some
> >fine-tuning. Basically, the way I see it is: if the two timestamps for a
> >single row do not match (or if the row, and therefore its timestamp, is
> >missing on one side or the other altogether):
> >     local ts > remote ts => Local row is exported to remote.
> >     remote ts > local ts => Remote row is exported to local.
> >     local ts > last sync time && no remote ts =>
> >          Local row is inserted on remote.
> >     local ts < last sync time && no remote ts =>
> >          Local row is deleted.
> >     remote ts > last sync time && no local ts =>
> >          Remote row is inserted on local.
> >     remote ts < last sync time && no local ts =>
> >          Remote row is deleted.
> >where the synchronization process is running on the local machine. By
> >exported, I mean the local values are sent to the remote machine, and the
> >row on that remote machine is updated to the local values. How does this
> >sound?

>     Having said that, a good algo will have to be written to cut down on
> network traffic and to keep database conversations down to a minimum.  This
> will be appreciated by people with low-bandwidth connections, I'm sure
> (dial-ups, fractional T1s, etc.).
Of course! On reflection, the assigned identification number I
mentioned above might be best then, instead of having to transfer the
entire set of key fields back and forth.

>     What would a vacuum do to a system being used by many people?
Probably lock them out of tables while they are vacuumed... Maybe
not really required in the end, possibly optional?

>     It could probably be named either way...but the one thing I really don't
> want to do is start hacking server code.  The PostgreSQL people have enough
> to do without worrying about trying to meld anything I've done to their
> server.   :)
Yea, they probably would appreciate that. They already have enough
on their plate for 7.x as it is! :)

>     Besides, I like the idea of having it operate as a stand-alone product.
> The only PostgreSQL features we would require would be triggers and
> plpgsql...what was the earliest version of PostgreSQL that supported
> plpgsql?  Even then, I don't see the triggers being all that complex.
No, provided that we don't do the identification number idea
(which, the more I think about it, probably will not work). As for which
version supports plpgsql, I don't know; one of the more hard-core pgsql
hackers can probably tell us that.

>     The only thing we'd need for Python is the Python extensions for
> PostgreSQL...which in turn require libpq and that's about it.  So, it
> should be able to run on any platform supported by Python and libpq.  
Of course. If it ran on NT as well as Linux/Unix, that would be
even better. :)

> Unix folks should be happy....assuming they have X running on the
> machine doing the replication or syncing.  Even then I wrote a curses
> based Python interface awhile back which allows buttons, progress
> bars, input fields, etc (I called it tinter and it's available at
> http://iximd.com/~dwalker).  It's a simple interface and could
> probably be cleaned up a bit but it works.  :)
Why would we want any type of GUI (X11 or curses) for this sync
program? I imagine just a command line program with a few options (local
machine, remote machine, db name, etc...), and nothing else. Though I will
take a look at your curses interface, as I have been wanting to make a
curses interface to a few db interfaces I have, in as simple a manner as
possible.

>     That would be a Good Thing.  Have webspace somewhere?  If I can get
> permission from the "powers that be" at the office I could host a website on
> our (Domino) webserver.
Yea, I've got my own web server (www.rkirkpat.net) with 1GB+ of disk
space available, sitting on a decent-speed DSL. I can even set up a
virtual server if we want (i.e. pgsync.rkirkpat.net :). A CVS repository,
email lists, etc. are possible with some effort (and time). So, where should
we start? TTYL.
PS. The current pages on my web site are very out of date at the
moment (save for the pgsql information). I hope to have updated ones up
within the week. 

---------------------------------------------------------------------------
|   "For to me to live is Christ, and to die is gain."                    |
|                                            --- Philippians 1:21 (KJV)   |
---------------------------------------------------------------------------
|   Ryan Kirkpatrick  |  Boulder, Colorado  |  http://www.rkirkpat.net/   |
---------------------------------------------------------------------------



Re: [HACKERS] database replication

From: Hannu Krosing

Ryan Kirkpatrick wrote:
> 
> On Sun, 26 Dec 1999, Damond Walker wrote:
> 
> > >     How about a single, separate table with the fields 'database',
> > >'tablename', 'oid', 'last_changed', that would store the same data as your
> > >PGR_TIME field. It would be separated from the actual data tables, and
> ...
> >     The problem with OIDs is that they are unique at the local level, but if
> > you try to use them between servers you can run into overlap.

The same is unfortunately true of any local primary key in a replicated
system.

Mariposa solved this by making the oid 8 bytes, of which the first four are
the site id and the remaining four are the oid as we have it now.

This has the added benefit of being able to determine which site created the
record.
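Just to illustrate the idea (untested, and the numbers are made up), packing
and unpacking such a key is trivial:

def make_global_id(site_id, local_oid):
    # First four bytes: originating site; last four bytes: the local oid.
    return (site_id << 32) | (local_oid & 0xFFFFFFFF)

def split_global_id(gid):
    return gid >> 32, gid & 0xFFFFFFFF

gid = make_global_id(7, 424242)
assert split_global_id(gid) == (7, 424242)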

> >     Oops...how about defining a trigger for this?  With deletion I guess we
> > would have to move a flag into another table saying we deleted record 'X'
> > with this primary key from this table.
> 
>         Or, according to my logic below, if a row is missing on one side
> or the other, then just compare the remaining row's timestamp to the last
> synchronization time (stored in a separate table/db elsewhere). The
> results of the comparison and the state of row existence tell one if the
> row was inserted or deleted since the last sync, and what should be done
> to perform the sync.

It's very difficult to find a _missing_ row quickly. It will always be
somewhat expensive.

Perhaps the easiest way would be to re-introduce time-travel. Then a deleted
row would just be an ordinary row with its valid_to timestamp set in the past.

Probably a set of (valid_from, valid_to, site_id, local_row_id) would be
sufficient to pinpoint the record both in time and space.
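A rough, untested illustration (the column names are made up): a delete then
becomes an UPDATE that closes the validity interval, so the row never
physically disappears and is cheap to find during a sync:

import pg

db = pg.connect(dbname='mydb')   # connection details made up

db.query("""CREATE TABLE customers_tt (
                site_id       int4,
                local_row_id  int4,
                valid_from    timestamp DEFAULT now(),
                valid_to      timestamp DEFAULT 'infinity',
                name          text)""")

# "Deleting" local_row_id 42 from site 7:
db.query("""UPDATE customers_tt SET valid_to = now()
             WHERE site_id = 7 AND local_row_id = 42
               AND valid_to = 'infinity'""")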

Being able to do it would require some improvements in postgres inheritance.
At least rules, triggers, indexes, constraints and defaults should be
inheritable.

>         No, provided that we don't do the identification number idea
> (which the more I think about it, probably will not work). As for what
> version support plpgsql, I don't know, one of the more hard-core pgsql
> hackers can probably tell us that.

Ask Jan Wieck, he did it :)

> >     The only thing we'd need for Python is the Python extensions for
> > PostgreSQL...which in turn requires libpq and that's about it.  So, it
> > should be able to run on any platform supported by Python and libpq.
> 
>         Of course. If it ran on NT as well as Linux/Unix, that would be
> even better. :)

NT has both Python and libpq but unfortunately no PyGreSQL. If someone takes
the time to make it compile, it would be appreciated :)

If you feel like pure Python protocol hacking, I can dig up my Python
module that was able to do simple queries for version 6.2.
------------------
Hannu