Thread: Is DBLINK transactional
Hi,
I am using dblink to read data from a remote database, insert that data into the local database, update the read rows in the remote database, and then continue with other work on the local database, all in the same transaction.
My question is: is dblink transactional? If the local transaction fails, will the update in the remote database roll back? And if the update in the remote database fails, will the insert in the local database roll back?
If not, is there a way to make dblink “transactional”?
Thanks
On Fri, Mar 12, 2010 at 10:27 AM, elias ghanem <e.ghanem@acteos.com> wrote:
> Hi,
>
> I am using dblink to read data from a remote database, insert that data
> into the local database, update the read rows in the remote database,
> and then continue with other work on the local database, all in the
> same transaction.
>
> My question is: is dblink transactional? If the local transaction
> fails, will the update in the remote database roll back? And if the
> update in the remote database fails, will the insert in the local
> database roll back?
>
> If not, is there a way to make dblink “transactional”?

Of course. You can always explicitly open a transaction on the remote side over dblink, do work, and commit it at the last possible moment. The two transactions aren't perfectly synchronized: if you crash in the precise moment between committing the remote and committing the local, you can get into trouble. The chances of this are extremely remote, though.

merlin
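(For concreteness, here is a minimal sketch of that pattern using the dblink contrib functions. The connection string and the local_table/remote_table names are placeholders, not anything from the thread:)

BEGIN;  -- local transaction

-- open a named connection to the remote database
SELECT dblink_connect('rmt', 'host=remotehost dbname=foo user=bar');

-- explicitly open a transaction on the remote side
SELECT dblink_exec('rmt', 'BEGIN');

-- read the remote rows and insert them locally
INSERT INTO local_table (id, val)
SELECT id, val
  FROM dblink('rmt', 'SELECT id, val FROM remote_table WHERE processed = false')
       AS t(id integer, val text);

-- mark those rows as read on the remote side
SELECT dblink_exec('rmt',
    'UPDATE remote_table SET processed = true WHERE processed = false');

-- ... other local work ...

-- commit the remote at the last possible moment, then commit locally
SELECT dblink_exec('rmt', 'COMMIT');
SELECT dblink_disconnect('rmt');
COMMIT;

If anything fails before the remote COMMIT, rolling back the local transaction and issuing a remote ROLLBACK undoes both sides; the dangerous window is only between the two commits at the end.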
On Fri, 2010-03-12 at 12:07 -0500, Merlin Moncure wrote:
> Of course. You can always explicitly open a transaction on the remote
> side over dblink, do work, and commit it at the last possible moment.
> The two transactions aren't perfectly synchronized: if you crash in the
> precise moment between committing the remote and committing the local,
> you can get into trouble. The chances of this are extremely remote,
> though.

If you want a better guarantee than that, consider using 2PC.

The problem with things that are "extremely remote" possibilities is that they tend to be less remote than we expect ;)

Regards,
	Jeff Davis
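(A sketch of the 2PC variant, reusing the hypothetical 'rmt' connection from the example above. The remote server needs max_prepared_transactions set above zero, and the gid 'dblink_tx_42' is made up:)

-- instead of committing the remote directly, prepare it:
SELECT dblink_exec('rmt', 'PREPARE TRANSACTION ''dblink_tx_42''');

-- commit the local transaction
COMMIT;

-- and only then resolve the remote side:
SELECT dblink_exec('rmt', 'COMMIT PREPARED ''dblink_tx_42''');

-- or, had the local COMMIT failed:
-- SELECT dblink_exec('rmt', 'ROLLBACK PREPARED ''dblink_tx_42''');

A crash between the local COMMIT and the COMMIT PREPARED no longer loses the remote work: the prepared transaction survives a remote restart and can still be committed afterwards, which is exactly why it has to be monitored, as noted below.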
On 13/03/2010 5:54 AM, Jeff Davis wrote:
> On Fri, 2010-03-12 at 12:07 -0500, Merlin Moncure wrote:
>> Of course. You can always explicitly open a transaction on the remote
>> side over dblink, do work, and commit it at the last possible moment.
>> The two transactions aren't perfectly synchronized: if you crash in
>> the precise moment between committing the remote and committing the
>> local, you can get into trouble. The chances of this are extremely
>> remote, though.
>
> If you want a better guarantee than that, consider using 2PC.

Translation in case you don't know: 2PC = two-phase commit.

Note that you have to monitor for "lost" transactions that were prepared for commit and then abandoned by the controlling app, and periodically get rid of them, or you'll start having issues.

> The problem with things that are "extremely remote" possibilities is
> that they tend to be less remote than we expect ;)

... and they know just when they can happen, despite all the odds, to maximise the pain and chaos caused.

--
Craig Ringer
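(A minimal monitoring query along those lines; the one-hour threshold is arbitrary:)

-- find prepared transactions that have been sitting around too long
SELECT gid, prepared, owner, database
  FROM pg_prepared_xacts
 WHERE prepared < now() - interval '1 hour';

-- then, for each entry you have verified is really abandoned:
-- ROLLBACK PREPARED 'the-gid-in-question';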
On Sat, 2010-03-13 at 20:10 +0800, Craig Ringer wrote:
> On 13/03/2010 5:54 AM, Jeff Davis wrote:
> > On Fri, 2010-03-12 at 12:07 -0500, Merlin Moncure wrote:
> >> Of course. You can always explicitly open a transaction on the
> >> remote side over dblink, do work, and commit it at the last possible
> >> moment. The two transactions aren't perfectly synchronized: if you
> >> crash in the precise moment between committing the remote and
> >> committing the local, you can get into trouble. The chances of this
> >> are extremely remote, though.
> >
> > If you want a better guarantee than that, consider using 2PC.
>
> Translation in case you don't know: 2PC = two-phase commit.
>
> Note that you have to monitor for "lost" transactions that were
> prepared for commit and then abandoned by the controlling app, and
> periodically get rid of them, or you'll start having issues.

And you still have the problem of committing one 2PC transaction and then crashing before committing the other, and then crashing the transaction monitor before it can record what crashed :P, though this possibility is even more remote than just crashing between the two original commits (dblink and local).

To get around this fundamental problem, you can use async queues and remember what got replayed on the remote side, so that if you have crashes on either side you can simply replay again.

> > The problem with things that are "extremely remote" possibilities is
> > that they tend to be less remote than we expect ;)
>
> ... and they know just when they can happen, despite all the odds, to
> maximise the pain and chaos caused.

--
Hannu Krosing   http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability
   Services, Consulting and Training
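(A minimal sketch of that replay-tracking idea; the table names are invented. The point is that the remote records which changes it has applied in the same remote transaction that applies them, so replaying after a crash is harmless:)

-- on the remote side: a memo of already-applied changes
CREATE TABLE applied_changes (change_id bigint PRIMARY KEY);

-- applying change 42 and recording it atomically on the remote:
BEGIN;
INSERT INTO applied_changes (change_id) VALUES (42);
-- a duplicate-key error here means change 42 was already replayed:
-- ROLLBACK and move on to the next queued change
UPDATE remote_table SET processed = true WHERE id = 42;
COMMIT;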
craig@postnewspapers.com.au (Craig Ringer) writes:
> On 13/03/2010 5:54 AM, Jeff Davis wrote:
>> On Fri, 2010-03-12 at 12:07 -0500, Merlin Moncure wrote:
>>> Of course. You can always explicitly open a transaction on the remote
>>> side over dblink, do work, and commit it at the last possible moment.
>>> The two transactions aren't perfectly synchronized: if you crash in
>>> the precise moment between committing the remote and committing the
>>> local, you can get into trouble. The chances of this are extremely
>>> remote, though.
>>
>> If you want a better guarantee than that, consider using 2PC.
>
> Translation in case you don't know: 2PC = two-phase commit.
>
> Note that you have to monitor for "lost" transactions that were
> prepared for commit and then abandoned by the controlling app, and
> periodically get rid of them, or you'll start having issues.

There can be issues even if they're not abandoned...

Note that prepared transactions establish, and maintain, until removed, all the appropriate locks on the underlying tables and tuples. As a consequence, maintenance-related activities may be somewhat surprisingly affected.

foo=# begin; set transaction isolation level serializable;
BEGIN
SET
foo=# insert into my_table (date_time, hostname, duration, diag) values (now(), 'foo', 1, 2);
INSERT 0 1
foo=# prepare transaction 'foo';
PREPARE TRANSACTION

[then, I quit the psql session...]

foo=# select * from pg_locks where relation = (select oid from pg_class where relname = 'my_table');
-[ RECORD 1 ]------+-----------------
locktype           | relation
database           | 308021
relation           | 308380
page               |
tuple              |
virtualxid         |
transactionid      |
classid            |
objid              |
objsubid           |
virtualtransaction | -1/433653
pid                |
mode               | RowExclusiveLock
granted            | t

If I try to truncate the table...

foo=# truncate my_table;
[hangs, waiting on the lock...]

[looking at another session...]

foo=# select * from pg_locks where relation = (select oid from pg_class where relname = 'my_table');
-[ RECORD 1 ]------+--------------------
locktype           | relation
database           | 308021
relation           | 308380
page               |
tuple              |
virtualxid         |
transactionid      |
classid            |
objid              |
objsubid           |
virtualtransaction | -1/433653
pid                |
mode               | RowExclusiveLock
granted            | t
-[ RECORD 2 ]------+--------------------
locktype           | relation
database           | 308021
relation           | 308380
page               |
tuple              |
virtualxid         |
transactionid      |
classid            |
objid              |
objsubid           |
virtualtransaction | 2/13
pid                | 3749
mode               | AccessExclusiveLock
granted            | f

Immediately upon submitting "commit prepared 'foo';", both locks are resolved quite quickly.

>> The problem with things that are "extremely remote" possibilities is
>> that they tend to be less remote than we expect ;)
>
> ... and they know just when they can happen, despite all the odds, to
> maximise the pain and chaos caused.

A lot of these kinds of things only come up as race conditions. The trouble is that a lot of races do wind up synchronizing themselves.

In sporting events this is intended and desired: an official fires the starter pistol or activates the horn, or what have you, with the intended result that athletes begin very nearly simultaneously. And at the end of Olympic races, their times frequently differ only by minuscule intervals.

In my example above there is a possibly unexpected synchronization point: the interweaving of the PREPARE TRANSACTION and TRUNCATE requests leads to a complete lock against the table. Supposing 15 processes then try accessing that table, they'll be blocked until the existing locks get closed out.
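(Incidentally, for diagnosing who is holding such a lock: for a prepared transaction, pg_locks reports the virtualtransaction as -1/<xid>, as in the output above, so it can be joined against pg_prepared_xacts. A sketch, not tested here:)

-- which prepared transactions hold which relation locks
select p.gid, p.prepared, l.relation::regclass, l.mode
  from pg_locks l
  join pg_prepared_xacts p
    on l.virtualtransaction = '-1/' || p.transaction
 where l.relation is not null;

That would point straight at the orphaned 'foo' transaction in this example.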
That release takes place the very instant after the COMMIT PREPARED request comes in. At that moment, 15 "racers" are released very nearly simultaneously. If there is any further mischief to be had in the race, well, they're set up to tickle it...
--
let name="cbbrowne" and tld="gmail.com" in name ^ "@" ^ tld;;
http://linuxdatabases.info/info/nonrdbms.html
"Barf, what is all this prissy pedantry?  Groups, modules, rings,
ufds, patent-office algebra.  Barf!" -- R. William Gosper