Re: Re: Failed Statements within Transactions - Mailing list pgsql-general
From | Lincoln Yeoh
---|---
Subject | Re: Re: Failed Statements within Transactions
Date |
Msg-id | 3.0.5.32.20001230160728.008a6cd0@192.228.128.13
In response to | Re: Re: Failed Statements within Transactions (Mike Mascari <mascarm@mascari.com>)
List | pgsql-general
At 11:14 PM 12/29/00 -0500, Mike Mascari wrote:
>INSERT INTO foo SELECT 1, 'Hello' WHERE NOT EXISTS (SELECT
>key FROM foo WHERE key = 1);
>
>The insert will then insert either (1) a single row if the
>row did not already exist, or (2) no rows if the row already
>existed. If the latter, you may then update the row using an

Will that really work? I've tried something like that before and it didn't,
because rows inserted by one transaction may not be visible to selects in
other transactions until the inserting transaction commits (this depends on
your transaction isolation level, but the default behaves that way, and in
this case the default is recommended!).

Try starting two psql sessions, do a BEGIN in each, run the query in both
transactions, and only then commit both. I strongly suspect you'll see
duplicate records. Anyone relying on this method should change it "Real
Soon Now".

The options I see are:

1) Use a UNIQUE constraint. I don't like this, as it means rolling back
work, and in some cases you may have to redo almost everything just to do
an update instead, so you may have to store the data elsewhere first
instead of stuffing it into the database as you go. I also prefer database
errors to be exceptions, not something that occurs in normal operation.
Maybe I'm strange, but I dislike the idea of _using_ errors; it seems
rather kludgy and hints that there is a design issue somewhere. Dealing
with errors is of course fine.

2) Lock the table, do the select, then insert, then unlock the table (in
PostgreSQL that means rollback/commit; you can't unlock earlier).

3) Use a different locking mechanism. I've suggested adding a
lock-on-an-arbitrary-string feature to PostgreSQL, much like MySQL's
GET_LOCK. What you would then be able to do is:

    begin transaction
    do lots of stuff
    lock string "table=foo,key=1"
    select count(*) from foo where key=1
    if count=0
        insert new row (key=1)
        link row to other tables
        update the other tables
    else
        update row where key=1
        update the other tables
    fi
    commit (and release lock)

This method does not block everybody, unlike locking the entire table; it
only blocks those about to do the same thing. Other people can insert
foo,key=2 and it doesn't matter, full speed ahead ;). And when you port
your application to less capable databases, you just lock the entire table
instead, with no need for a huge rewrite.

Trouble is, I seem to be the only one interested in this feature, and I'm
not confident that my coding skills are up to scratch to add it myself. I'd
try to copy and adapt the lock table mechanism, but I haven't even looked
at it yet to see whether that is crazy or viable :). It involves touching
some core stuff. Still, it may be easier to do this than adding transaction
savepoints.

What do the PostgreSQL developers think? Don't you think it would be cool
to be able to:

    lock string "Working deep magic type 2 on e-commerce tables"

;). Hmm, hierarchical locks would be interesting but rather academic. I'll
be happy with just lock string :).

Cheerio,
Link.

So much procrastination left to do ;).
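
A minimal sketch of the race described above. The table definition is an
assumption (the original post doesn't give one), and the default READ
COMMITTED isolation is in effect:

    -- Assumed schema; note there is deliberately no UNIQUE constraint.
    CREATE TABLE foo (key integer, val text);

    -- psql session A:
    BEGIN;
    INSERT INTO foo SELECT 1, 'Hello'
        WHERE NOT EXISTS (SELECT key FROM foo WHERE key = 1);
    -- inserts one row: key 1 did not exist yet.

    -- psql session B, before A commits:
    BEGIN;
    INSERT INTO foo SELECT 1, 'Hello'
        WHERE NOT EXISTS (SELECT key FROM foo WHERE key = 1);
    -- also inserts one row: A's row is uncommitted, so the NOT EXISTS
    -- subselect in B cannot see it, and nothing blocks the insert.

    -- both sessions:
    COMMIT;

    SELECT count(*) FROM foo WHERE key = 1;    -- 2: duplicate rows.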
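
Option 2 might look like the following sketch. SHARE ROW EXCLUSIVE is one
lock mode that conflicts with itself, so two sessions running this block
are serialized; the choice of mode is my assumption, not something stated
in the message:

    BEGIN;
    LOCK TABLE foo IN SHARE ROW EXCLUSIVE MODE;  -- conflicts with itself and with writers
    SELECT count(*) FROM foo WHERE key = 1;
    -- if the count is 0:
    INSERT INTO foo VALUES (1, 'Hello');
    -- otherwise:
    --   UPDATE foo SET val = 'Hello' WHERE key = 1;
    COMMIT;  -- as noted above, the table lock is released only at COMMIT/ROLLBACK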
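
For comparison, the MySQL facility referred to in option 3 is
GET_LOCK()/RELEASE_LOCK() on an arbitrary string. Roughly (MySQL syntax;
note these locks are acquired and released independently of transactions):

    SELECT GET_LOCK('table=foo,key=1', 10);   -- wait up to 10 seconds; returns 1 if acquired
    -- check whether key=1 exists, then INSERT or UPDATE accordingly
    SELECT RELEASE_LOCK('table=foo,key=1');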