Thread: Asynchronous database replication
I have a project on my plate which will involve potentially hundreds of PG8 databases in the field which will need to synchronize data with a central database. The company is a secular nonprofit which delivers medical services to underprivileged kids as well as to disaster victims like those hit by Katrina. We have six mobile medical units there now, as a matter of fact.

Some of these databases will have 24/7 net connections; some may not even have telephone access for days, so traditional database replication techniques won't work. I've not found any third-party software yet which could help us here, so I'm proceeding on the assumption that we're going to need to build it ourselves.

This sort of database topology is virgin ground for me, but I'm guessing that others here have encountered this challenge before and will have some tips/advice/war stories to steer us in the right direction.
smanes@magpie.com (Steve Manes) writes:

> I have a project on my plate which will involve potentially hundreds
> of PG8 databases in the field which will need to synchronize data with
> a central database. The company is a secular nonprofit which delivers
> medical services to underprivileged kids as well as to disaster
> victims like those hit by Katrina. We have six mobile medical units
> there now as a matter of fact.
>
> Some of these databases will have 24/7 net connections; some may not
> even have telephone access for days so traditional database
> replication techniques won't work. I've not found any third-party
> software yet which could help us here so I'm proceeding on the
> assumption that we're going to need to build it ourselves.
>
> This sort of database topography is virgin ground for me but I'm
> guessing that others here have encountered this challenge before and
> will have some tips/advice/war stories to steer us in the right
> direction.

Well, what you clearly want/need is asynchronous multimaster...

I'm involved with Slony-I, which is asynchronous but definitely, consciously, intentionally NOT multimaster.

It seems to me that you might be able to usefully cannibalize components from Slony-I; the trigger functions that it uses to intercept updates seem likely to be useful. Some of the data structures would be useful, notably "sl_log_1", which is where the updates are collected.

There are some conspicuous "troublesome bits" which Slony-I has evaded since it is NOT multimaster. For instance, you'll need some form of conflict resolution system, as async multimaster allows inserting conflicting combinations of updates.

You may need some special way of detecting updates to "balance tables," that is, tables where people typically issue updates of the form:

  update balance_table set balance = balance + 10;

In Slony-I, that is read, by the trigger, as, let's say...

  update balance_table set balance = 450;

(as the old value was 440, and 440+10 = 450). I have been led to believe that Sybase has a sort of "delta update" for this sort of thing...

It's worth your while to look into whatever you can find on how other async multimaster systems function. Two conspicuous (though perhaps unexpected) examples include:

a) Palm Computing's PalmSync system, which addresses conflicts by creating duplicate records and saying "You fix that..."

b) Lotus Notes, which does a somewhat document-oriented sort of async MM replication.

There's _some_ collected wisdom around; if you visit the Slony-I list, you might be able to attract some commentary. Just be aware that we're not planning to make it a multimaster system :-).

--
output = reverse("gro.gultn" "@" "enworbbc")
http://cbbrowne.com/info/slony.html
TTY Message from The-XGP at MIT-AI:
The-XGP@AI 02/59/69 02:59:69
Your XGP output is startling.
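[Editor's note: the "balance table" hazard described above can be sketched in a few lines. This is an illustrative Python toy, not Slony-I code: it contrasts replaying captured final values (where the last writer wins and a concurrent increment is lost) with replaying captured deltas (which commute). The function names and figures are invented for the example.]

```python
# Hypothetical sketch (not Slony-I code) contrasting value capture with
# delta capture for a "balance table" under async multimaster.

def merge_value_logs(base, logged_values):
    """Replay logged final values: last writer wins, earlier deltas are lost."""
    result = base
    for value in logged_values:
        result = value          # each logged value overwrites the previous one
    return result

def merge_delta_logs(base, logged_deltas):
    """Replay logged deltas: increments commute, so no update is lost."""
    return base + sum(logged_deltas)

# Both sites replicate a balance of 440, then update while disconnected:
# site A runs "balance = balance + 10", site B runs "balance = balance + 25".
value_result = merge_value_logs(440, [450, 465])  # 465: site A's +10 vanished
delta_result = merge_delta_logs(440, [10, 25])    # 475: both updates survive
```

This is why a multimaster log may need to record the operation (the "+10") rather than the resulting row image.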
Chris Browne <cbbrowne@acm.org> writes:

> Well, what you clearly want/need is asynchronous multimaster...

I didn't catch anything in his description that answered whether he needs multimaster, or whether a simple single-master-with-many-slaves model would suffice.

> I'm involved with Slony-I, which is asynchronous but definitely,
> consciously, intentionally NOT multimaster.
>
> It seems to me that you might be able to usefully cannibalize
> components from Slony-I; the trigger functions that it uses to
> intercept updates seem likely to be useful. Some of the data
> structures would be useful, notably "sl_log_1", which is where the
> updates are collected.

A general-purpose replication system is a lot trickier and more technical than building an application-specific system. While all of the above is cool, if you're able to design the application around certain design constraints you can probably build something much simpler.

My first reaction to this description was to consider some sort of model where the master database publishes text dumps of the master database which are regularly downloaded and loaded on the slaves. The slaves treat those tables as purely read-only reference tables.

If you need data to propagate from the clients back to the server then things get more complicated. Even then you could sidestep a lot of headaches if you can structure the application in specific ways, such as guaranteeing that the clients can only insert, never update, records.

--
greg
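[Editor's note: the publish/load model above can be sketched with Python's sqlite3 standing in for PostgreSQL, and `iterdump` standing in for `pg_dump --table=...`. The table name and contents are invented for illustration.]

```python
import sqlite3

# Minimal sketch of the publish/load model: the master dumps a reference
# table to SQL text; each slave reloads it wholesale and treats it as
# read-only. (Against PostgreSQL, pg_dump/psql would play these roles.)

master = sqlite3.connect(":memory:")
master.execute("CREATE TABLE drug_formulary (id INTEGER PRIMARY KEY, name TEXT)")
master.execute("INSERT INTO drug_formulary VALUES (1, 'amoxicillin')")
master.commit()

# "Publish": serialize the master's reference tables as replayable SQL text.
dump_text = "\n".join(master.iterdump())

# "Load": a slave rebuilds its copy from the published dump.
slave = sqlite3.connect(":memory:")
slave.executescript(dump_text)

rows = slave.execute("SELECT name FROM drug_formulary").fetchall()
```

The appeal of this design is that the slave-side loader is trivial and idempotent: drop, reload, done — no per-row reconciliation.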
On Sep 15, 2005, at 9:54 PM, Greg Stark wrote:

> If you need data to propagate from the clients back to the server then things
> get more complicated. Even then you could side step a lot of headaches if you
> can structure the application in specific ways, such as guaranteeing that the
> clients can only insert, never update records.

And even updates could be OK if the application can support the right partitioning of the data and only do it one place at a time. With some kinds of field-based work it might be suitable to have global (read-only) data along with data created in the field that is site/client specific. As long as the data collected in the field is not being updated on the master, it could continue to be updated in the field and synced back to the master database.

John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL
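[Editor's note: the partitioning rule above — each row is owned by the site that created it, and only that site may update it — can be sketched as follows. This is an illustrative toy; the column names and site identifiers are invented.]

```python
# Sketch of site-ownership partitioning: every field-collected row carries
# an origin_site, and a node may update a row only if it is the origin.
# Everywhere else the row is read-only replicated data, so updates never
# conflict across sites.

class SyncError(Exception):
    pass

def apply_update(row, changes, node_id):
    """Apply an update locally, but only at the row's owning site."""
    if row["origin_site"] != node_id:
        raise SyncError("row is read-only here; only its origin site may update it")
    row.update(changes)
    return row

visit = {"patient_id": 42, "origin_site": "mmu-3", "notes": "initial exam"}

apply_update(visit, {"notes": "follow-up"}, "mmu-3")   # OK at the origin
# apply_update(visit, {...}, "clinic-7") would raise SyncError elsewhere
```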
Greg Stark wrote:

> My first reaction to this description was to consider some sort of model where
> the master database publishes text dumps of the master database which are
> regularly downloaded and loaded on the slaves. The slaves treat those tables
> as purely read-only reference tables.
>
> If you need data to propagate from the clients back to the server then things
> get more complicated. Even then you could side step a lot of headaches if you
> can structure the application in specific ways, such as guaranteeing that the
> clients can only insert, never update records.

It's the latter, I'm afraid. The master actually won't be modifying or inserting any data itself, just publishing it for the client databases in its domain. Almost all data inserts/updates/deletes will occur on the leaf nodes, i.e. at the remote health clinics and MMUs (mobile medical units). What we need to ensure is that if Patient X visits Site A on Monday, his records are there for a followup visit at Site B on Tuesday.

Even this has thorny problems: for instance, Patient X visits Site B before Site A has had time to replicate its current data back to the master and before Site B has pulled those updates.

The requirements scream ASP model, except that this system needs to be functional for disaster management, where it's likely there won't be any communications. At least, that's the constraint I've been given. This may turn out to be an issue of managing client expectations and some additional infrastructure investment (i.e. better satellite communications on the MMUs and satellite backup for the fixed clinics).

We're at the very early head-banging stages of this project now, so I have a fairly optimistic list of requirements to resolve. This is an open source project, though, so it would be terrific if we could build it non-ASP.
Steve Manes wrote:

> Greg Stark wrote:
>
>> My first reaction to this description was to consider some sort of model
>> where the master database publishes text dumps of the master database
>> which are regularly downloaded and loaded on the slaves. The slaves treat
>> those tables as purely read-only reference tables.
>>
>> If you need data to propagate from the clients back to the server then
>> things get more complicated. Even then you could side step a lot of
>> headaches if you can structure the application in specific ways, such as
>> guaranteeing that the clients can only insert, never update records.
>
> It's the latter, I'm afraid. The master actually won't be modifying or
> inserting any data itself, just publishing it for the client databases
> in its domain. Almost all data inserts/updates/deletes will occur on
> the leaf nodes, i.e. at the remote health clinics and MMUs (mobile
> medical units). What we need to ensure is that if Patient X visits Site
> A on Monday that his records are there for a followup visit at Site B
> on Tuesday.
>
> Even this has salient problems: for instance, Patient X visits Site B
> before Site A has had time to replicate its current data back to the
> master and Site B has pulled those updates.

What about doing updates in a peer-to-peer style? Basically, each node updates any others it comes in contact with (both with its local changes and anything it's received from the master) and everyone pushes changes back to the master when they can. Sort of the way airplanes crossing the ocean pass radio messages for each other.

I'm assuming two things:

1) Communication b/w local nodes is easier / occurs more frequently than communicating with the master. It's easier for an MMU to make a local call or visit a clinic than dial the sat phone.

2) Patients travel locally. Patient X might visit Sites A and B a day apart, but he's unlikely to visit Site C, which is a few countries away, any time soon.

Basically, I don't think you need to update all nodes with every record immediately.

For some early-morning reason, this made me think of distributed version control, but I'm not entirely sure how one would use it in this case. See svk.elixus.org.

--
Peter Fein    pfein@pobox.com    773-575-0694

Basically, if you're not a utopianist, you're a schmuck. -J. Feldman
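[Editor's note: the peer-to-peer idea above can be reduced to a toy model. If change records are immutable and keyed by (origin, sequence), then "gossiping" between two nodes is just a set union, and whichever node eventually reaches the master carries everyone's changes. All names here are invented for illustration.]

```python
# Toy gossip sketch: each node keeps a set of immutable change records
# keyed by (origin, seq). Nodes in contact swap anything the other lacks;
# any node that later reaches the master pushes the accumulated union.

def gossip(node_a, node_b):
    """Two nodes in contact exchange all change records the other is missing."""
    merged = node_a | node_b
    node_a |= merged            # sets are mutated in place
    node_b |= merged
    return node_a, node_b

mmu = {("mmu", 1), ("mmu", 2)}      # changes made in the field
clinic = {("clinic", 1)}            # changes made at a fixed clinic

gossip(mmu, clinic)                 # the MMU visits the clinic

# Later only the clinic reaches the master, but it relays the MMU's changes too.
master = set()
master |= clinic
```

Immutability of the records is what makes union safe here; as soon as records can be edited in place, this collapses back into the conflict-resolution problem.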
> The requirements scream ASP model except that this system needs to be
> functional for disaster management where it's likely there won't be any
> communications. At least, that's the constraint I've been given.

I'm not an expert on this, but just kicking around the idea, the approach I think I'd look into:

- clients don't access the database directly
- there's a middleware layer and clients make higher-level RPC-type calls whose semantics more closely match the client functionality
- then those calls can be logged and replicated...

--
Scott Ribe
scott_ribe@killerbytes.com
http://www.killerbytes.com/
(303) 665-7007 voice
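[Editor's note: the middleware idea above amounts to command logging — replicate the high-level calls rather than row changes. A minimal sketch, with all class, method, and field names invented for illustration:]

```python
import json

# Sketch of the middleware idea: clients never touch SQL directly; they
# call higher-level operations, and the middleware both executes the call
# locally and appends it to a replayable log for later replication.

class Middleware:
    def __init__(self):
        self.log = []           # ordered call log, shipped to peers later
        self.patients = {}      # stands in for the local database

    def register_patient(self, patient_id, name):
        self.log.append(json.dumps(
            {"op": "register_patient", "id": patient_id, "name": name}))
        self.patients[patient_id] = {"name": name}

def replay(log):
    """A peer rebuilds state by replaying the logged high-level calls."""
    peer = Middleware()
    for entry in log:
        call = json.loads(entry)
        if call["op"] == "register_patient":
            peer.register_patient(call["id"], call["name"])
    return peer

site = Middleware()
site.register_patient(42, "Patient X")
central = replay(site.log)      # the central node applies the same calls
```

Because the log entries carry intent ("register this patient") rather than row images, this also dodges the balance-table problem: a "+10" call replays as "+10" everywhere.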
John DeSoi <desoi@pgedit.com> writes:

> > If you need data to propagate from the clients back to the server then things
> > get more complicated. Even then you could side step a lot of headaches if you
> > can structure the application in specific ways, such as guaranteeing that the
> > clients can only insert, never update records.
>
> And even updates could be OK if the application can support the right
> partitioning of the data and only do it one place at a time. With some kinds
> of field based work it might be suitable to have global (read only) data along
> with data created in the field that is site/client specific. As long as the
> data collected in the field is not being updated on the master, it could
> continue to be updated in the field and synced back to the master database.

Sure, though then you have to deal with what data to display on the client end: the most recently downloaded master data, or the locally updated data? What about after you upload your local data, when you're not sure whether the master data has been reconciled? Not impossible, but it would be more work.

But I find a surprisingly high fraction of applications are very amenable to being handled as insert-only. A medical application strikes me as something whose users are all the more likely to be happy with an insert-only model.

So instead of allowing remote users to modify data directly, you only allow them to "request" an update. Then when they look at the record it still makes logical sense to see the old data, along with their "requested" updates.

Essentially, any replication system is based on insert-only queues. If you can design the application around that, you avoid having to implement some sort of mapping to hide that.

--
greg
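[Editor's note: the "requested update" model above can be sketched in a few lines. The record and request shapes here are invented; the point is that clients only ever append, and the current view is the base record with pending requests layered on top.]

```python
# Sketch of the insert-only "requested update" model: remote users never
# modify a record in place. They append a request row, and readers see
# the base record plus any pending requested changes applied over it.

def current_view(base, requests):
    """Base record with pending requested updates layered on, newest last."""
    view = dict(base)               # the base record itself is never mutated
    for req in requests:            # requests are append-only, in arrival order
        view.update(req["changes"])
    return view

record = {"patient_id": 42, "allergy": "none recorded"}
requests = []                       # the only structure clients may append to

requests.append({"site": "mmu-3", "changes": {"allergy": "penicillin"}})
view = current_view(record, requests)
```

Replicating this is just shipping the append-only request queue — exactly the insert-only guarantee discussed earlier in the thread.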
On Friday 16 September 2005 07:28 am, John DeSoi wrote:

> On Sep 15, 2005, at 9:54 PM, Greg Stark wrote:
>
>> If you need data to propagate from the clients back to the server then
>> things get more complicated. Even then you could side step a lot of
>> headaches if you can structure the application in specific ways, such as
>> guaranteeing that the clients can only insert, never update records.
>
> And even updates could be OK if the application can support the right
> partitioning of the data and only do it one place at a time. With
> some kinds of field based work it might be suitable to have global
> (read only) data along with data created in the field that is site/
> client specific. As long as the data collected in the field is not
> being updated on the master, it could continue to be updated in the
> field and synced back to the master database.
>
> John DeSoi, Ph.D.
> http://pgedit.com/
> Power Tools for PostgreSQL

Hi,

Maybe one thing that could be used is a trigger to timestamp each record whenever it is inserted/updated on the client computers; then whenever they get back to base, run a script and update the master with any changes since the last sync (according to the timestamp).

Regards
Neil
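[Editor's note: Neil's timestamp scheme as a toy. A real implementation would use a PostgreSQL trigger setting a `modified_at` column; this Python sketch (names invented) shows the sync query shape, and deliberately ignores a real hazard — clock skew between field machines.]

```python
# Toy sketch of timestamp-based sync: a "trigger" stamps each row with a
# modification time, and the sync script ships only rows newer than the
# last successful sync. Clock skew across field machines is ignored here.

def touch(table, row_id, fields, now):
    """Stands in for an ON INSERT OR UPDATE trigger setting modified_at."""
    row = table.setdefault(row_id, {})
    row.update(fields)
    row["modified_at"] = now

def changes_since(table, last_sync):
    """What the client ships to the master on reconnect."""
    return {rid: row for rid, row in table.items()
            if row["modified_at"] > last_sync}

table = {}
touch(table, 1, {"notes": "intake"}, now=100)
touch(table, 2, {"notes": "follow-up"}, now=205)

delta = changes_since(table, last_sync=150)   # only row 2 is shipped
```

Note that timestamps alone can't detect conflicting edits to the same row at two sites; they only bound how much data the sync script has to send.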
> But I find a surprisingly high fraction of applications are very amenable to
> being handled as insert-only. A medical application strikes me as something
> someone is all the more likely to be happy with an insert-only model.

Yes, I work in the medical field, and use my own home-grown (predates Slony) replication system to maintain an offsite near-real-time backup server. The fact that most of my data is insert-only greatly simplifies things.

--
Scott Ribe
scott_ribe@killerbytes.com
http://www.killerbytes.com/
(303) 665-7007 voice