On 2010-07-05 12:11, Pierre C wrote:
>
>> The problem can generally be written as "tuples seeing multiple
>> updates in the same transaction"?
>>
>> I think that every time PostgreSQL is used with an ORM, there is a
>> certain amount of multiple updates taking place. I have actually
>> been reworking the client side to get around multiple updates, since
>> they popped up in one of my profiling runs. Although the time I
>> optimized away ended up being both "roundtrip time" + "update time",
>> having the database do half of it transparently might have been
>> sufficient to move my biggest problem elsewhere..
>>
>> To sum up: yes, I think it is indeed a real-world case.
>>
>> Jesper
>
> On the Python side, elixir and sqlalchemy have an excellent way of
> handling this: basically, when you start a transaction, all changes
> are accumulated in a "session" object and only flushed to the
> database on session commit (which is also generally the transaction
> commit). This has multiple advantages; for instance it is able to
> issue multiple-line statements, updates are only done once, you save
> a lot of roundtrips, etc. Of course it is most of the time not
> compatible with database triggers, so if there are triggers the ORM
> needs to be told about them.
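To make sure I read that right, here is a minimal sketch of the unit-of-work
behaviour being described, using SQLAlchemy; the table, columns and
connection string are made up for illustration:

    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Item(Base):
        # hypothetical table, just for illustration
        __tablename__ = 'item'
        id = Column(Integer, primary_key=True)
        name = Column(String)

    engine = create_engine('postgresql://user@localhost/testdb')
    Session = sessionmaker(bind=engine, autoflush=False)

    session = Session()
    item = session.query(Item).get(1)
    item.name = 'first'
    item.name = 'second'   # overwrites the pending change in memory only;
                           # no UPDATE has been sent to the server yet
    session.commit()       # one UPDATE with the final value is issued here,
                           # so the intermediate value never reaches the
                           # server and row triggers fire once, not twice

So the server only ever sees the final state of each row; the multiple
updates are collapsed on the client before anything is sent.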
How about unique constraints, foreign key violations and check
constraints? Would you also postpone those errors to commit time? And
what about transactions with lots of data?
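(For foreign keys, PostgreSQL can already do part of that itself: a
constraint declared DEFERRABLE is checked at commit rather than per
statement; check constraints have no such option. A small sketch,
reusing the hypothetical session from above:)

    from sqlalchemy import text

    session = Session()
    session.execute(text("SET CONSTRAINTS ALL DEFERRED"))
    # statements in this transaction may temporarily violate any
    # DEFERRABLE foreign key; the violation, if any, is only
    # reported at COMMIT
    session.commit()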
It doesn't really seem like a net benefit to me, but I can see
applications where it will easily fit.
Jesper