Michael A Nachbaur <mike@nachbaur.com> writes:
> On Wednesday 09 July 2003 02:28 pm, Cliff Wells wrote:
>> On Wed, 2003-07-09 at 14:14, Michael A Nachbaur wrote:
>> > So, I'm looking at synchronizing 4 tables from one master database to
>> > several child databases. I'm thinking of doing the following with
>> > DBD::Multiplex:
>> >
>> > DELETE FROM TableA;
>> > INSERT INTO TableA (..) VALUES (...);
>> > ....
>> >
>> > on all the child databases, but I'm not sure what kind of impact
>> > this would have on my servers. My impression is that this would
>> > hammer the indexes, and might blow any memory optimization out
>> > the window. Only a few records in my dataset will change from
>> > time to time, but just the process of determining what is
>> > different may take more effort than simply rebuilding.
>>
>> Keep a timestamp associated with each record. Only update the records
>> with timestamps later than your last sync.
>
> I'm dealing with an existing database structure that, though I can
> change it, has a lot of impact on the rest of my infrastructure. If
> I can find a way of doing this without resorting to timestamps, I'd
> much rather do it that way.
Would it be possible to add another table (a changelog) containing the
primary key of each record that has changed, plus a timestamp? The
changelog table could be maintained via triggers (on insert, update,
and delete). That keeps your synchronization software relatively
simple: it only needs to query the changelog for rows that have
changed since the last sync.
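A minimal sketch of the idea, using SQLite via Python's sqlite3 module
for illustration (trigger syntax and timestamp functions vary by DBMS,
and the table and column names here -- TableA, id, changelog -- are
hypothetical, not from your schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (id INTEGER PRIMARY KEY, name TEXT);

-- One row per change; the sync job reads (and later clears) this.
CREATE TABLE changelog (
    pk         INTEGER,
    op         TEXT,                           -- 'I', 'U', or 'D'
    changed_at TEXT DEFAULT (datetime('now'))
);

CREATE TRIGGER tablea_ins AFTER INSERT ON TableA
BEGIN INSERT INTO changelog (pk, op) VALUES (NEW.id, 'I'); END;

CREATE TRIGGER tablea_upd AFTER UPDATE ON TableA
BEGIN INSERT INTO changelog (pk, op) VALUES (NEW.id, 'U'); END;

CREATE TRIGGER tablea_del AFTER DELETE ON TableA
BEGIN INSERT INTO changelog (pk, op) VALUES (OLD.id, 'D'); END;
""")

# Normal application writes; the triggers record each change.
conn.execute("INSERT INTO TableA (id, name) VALUES (1, 'alpha')")
conn.execute("UPDATE TableA SET name = 'beta' WHERE id = 1")
conn.execute("DELETE FROM TableA WHERE id = 1")

# The sync job replays only what the changelog contains, instead of
# deleting and re-inserting every row on every child database.
changes = conn.execute(
    "SELECT pk, op FROM changelog ORDER BY rowid").fetchall()
print(changes)  # [(1, 'I'), (1, 'U'), (1, 'D')]
```

The existing table definitions stay untouched, so the rest of your
infrastructure never sees the changelog table.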
Jason