Re: WAS: [Fwd: PostgreSQL new commands proposal] - Mailing list pgsql-hackers
| From | Stephan Szabo |
|---|---|
| Subject | Re: WAS: [Fwd: PostgreSQL new commands proposal] |
| Date | |
| Msg-id | 20011201152611.Q73200-100000@megazone23.bigpanda.com |
| In response to | Re: WAS: [Fwd: PostgreSQL new commands proposal] (Sergio Pili <sergiop@sinectis.com.ar>) |
| List | pgsql-hackers |
On Sat, 1 Dec 2001, Sergio Pili wrote:

> [documents snipped]

Thanks.

> > The delete/update thing is:
> >
> > transaction 1 starts
> > transaction 2 starts
> > transaction 1 deletes a row from A
> >   -- There are no rows in B that can be seen by
> >   -- this transaction, so you don't get any deletes.
> > transaction 2 updates a row in B
> >   -- The row in A can still be seen, since it
> >   -- hasn't expired for transaction 2.
> > transaction 1 commits
> > transaction 2 commits
>
> I understand. This happens because, with MVCC, writes don't block
> reads...
> I don't like this much, but that is the way MVCC works.

You can get around this by taking row-level locks with FOR UPDATE, or by
taking table locks, but you have to be careful to actually do it, and
AFAIK FOR UPDATE doesn't work in subselects, while table locks are much,
much too strong (FOR UPDATE is too strong as well, but less so - see the
arguments about the FK locking ;) ).

> > The trigger thing is (I'm not 100% sure, but pretty sure this
> > is what'll happen - given that a test rule with a
> > function that prints a debugging statement gave me the
> > originally specified value, not the final value):
> >
> > transaction 1 starts
> > you say: update A's key to 2,2
> >  - the rule expansion does a cascade update of B to 2,2
> >  - a before trigger on A sets NEW.key to 3,3
> >  - the row in A actually becomes 3,3
> >
> > You'd no longer be checking the validity of the value
> > in B, and so you'd have a broken constraint.
>
> If this is true, does that mean the rules can be bypassed by
> before triggers?
> Aren't the commands executed in the triggers passed through the
> rewriting system?

Before triggers have the option of changing the *actual* tuple to be
inserted or updated, as I understand it. It's not that the before
trigger runs an update (which wouldn't work, because the row isn't there
yet), but that it can change the row being inserted (for example, to add
a timestamp) or cancel the insert/update/delete entirely by returning
NULL - which would mean, I believe, that you'd have the rule's actions
going off even though the original operation was canceled by the
trigger.

> > All in all I think you'd be better off with triggers than rules, but I
> > understand what you're trying to accomplish.
>
> We fully agree with you that our examples and inclusion dependencies
> can be handled entirely with triggers. In fact, we have done this many
> times in several cases. The question here is not, for example, "how to
> preserve an inclusion dependency" but "which is the better way to
> preserve inclusion dependencies".
> We are so insistent on this matter because the level of abstraction
> (and generality) of rules is higher than that of triggers, and thus it
> is easier to express a real-world problem as a rule than as a trigger.
> PostgreSQL rules can "almost" be used for this sort of problem (we
> won't bother you with the whole set of features that this approach
> would allow).
> In this way, for just a minimal price, we could buy a wide new set of
> capabilities. We assure you that this is a very good deal. If you want
> to discuss what those new capabilities are, we can send you a longer,
> more detailed document on the subject.

Well, I'm not really the person you need to convince, since I don't have
a strong view on the functionality/patch in question :) - I was just
pointing out that the example given wasn't likely to convince anyone.
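For illustration, the FOR UPDATE row-lock workaround mentioned above
looks roughly like this (the table and column names here are made up):

  BEGIN;
  -- Take a row lock on the referenced row in A; a concurrent DELETE of
  -- that row blocks until we finish, and if the DELETE got there first
  -- we wait and then see that the row is gone.
  SELECT * FROM a WHERE key = 1 FOR UPDATE;
  -- Now the write to B can't race against a delete of a.key = 1.
  UPDATE b SET a_key = 1 WHERE id = 42;
  COMMIT;

The caveat above still applies: FOR UPDATE can't be used inside a
subselect, so the lock has to be taken in a separate statement.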
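To make the before-trigger point concrete, here is a rough sketch of a
trigger doing both of the things described above (the table, its
columns, and the function name are all invented):

  CREATE FUNCTION a_before() RETURNS trigger AS '
  BEGIN
      -- Rewrite the tuple that will actually be stored.
      NEW.modified := now();
      -- Or cancel the operation entirely; any rule expansion of the
      -- original statement has already been planned and can still run.
      IF NEW.key IS NULL THEN
          RETURN NULL;
      END IF;
      RETURN NEW;
  END;
  ' LANGUAGE 'plpgsql';

  CREATE TRIGGER a_before_row BEFORE INSERT OR UPDATE ON a
      FOR EACH ROW EXECUTE PROCEDURE a_before();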
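And, for comparison, roughly the kind of rule being talked about for an
inclusion dependency - a cascading delete expressed as a rewrite rule
(again with invented names):

  -- Every b.a_key is supposed to reference some a.key; when a row of A
  -- goes away, delete the rows of B that referenced it.
  CREATE RULE a_del_cascade AS
      ON DELETE TO a
      DO DELETE FROM b WHERE b.a_key = OLD.key;

It reads nicely, but it runs into exactly the visibility and
before-trigger interactions discussed above.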