Re: determine snapshot after obtaining locks for first statement - Mailing list pgsql-hackers

From: Kevin Grittner
Subject: Re: determine snapshot after obtaining locks for first statement
Date:
Msg-id: 4B2A0F24020000250002D6EB@gw.wicourts.gov
In response to: Re: determine snapshot after obtaining locks for first statement (Greg Stark <gsstark@mit.edu>)
List: pgsql-hackers
Greg Stark <gsstark@mit.edu> wrote: 
> So for multi-statement transactions I don't see what this buys
> you.
Well, I became interested when Dr. Cahill said that adding this
optimization yielded dramatic improvements in his high-contention
benchmarks.  Clearly it won't help every load pattern.
> You'll still have to write the code to retry, and postgres
> retrying in the cases where it can isn't really going to be a
> whole lot better.
In my view, any use of a relational database carries with it the
possibility of a serialization error.  In other database products
I've run into situations where even a simple SELECT at READ
COMMITTED can result in a serialization failure, so all application
software should use a framework capable of recognizing such
failures and automatically recovering from them.  I just try to
keep them to a manageable level.
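
Roughly what I mean by a "framework" is a retry wrapper along these
lines.  This is only a sketch, assuming psycopg2 and a
caller-supplied txn_body() holding the transaction's statements; a
real framework would add a delay between attempts and some logging:

import psycopg2
import psycopg2.errorcodes

def run_with_retry(conn, txn_body, max_attempts=5):
    # txn_body(cur) issues the statements of one transaction.
    for _ in range(max_attempts):
        try:
            with conn:                # commit on success, rollback on error
                with conn.cursor() as cur:
                    return txn_body(cur)
        except psycopg2.Error as e:
            # 40001 = serialization_failure, 40P01 = deadlock_detected
            if e.pgcode in (psycopg2.errorcodes.SERIALIZATION_FAILURE,
                            psycopg2.errorcodes.DEADLOCK_DETECTED):
                continue              # transaction was rolled back; retry it
            raise
    raise RuntimeError("gave up after repeated serialization failures")

The calling code never sees the failure unless the retries are
exhausted.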
> people might write a single-statement SQL transaction and not
> bother writing retry logic and then be surprised by errors.
As has often been said here -- you can't always protect people from
their own stupidity.
> I'm unclear why serialization failures would be rare.
Did I say that somewhere???
> It seems better to report the situation to the user all the time
> since they have to handle it already and might want to know about
> the problem and implement some kind of backoff
The point was to avoid a serialization failure and its related
rollback.  Do you think we should be reporting something to the
users every time a READ COMMITTED transaction blocks and then picks
the updated row?  (Actually, given that the results may be based on
an inconsistent view of the database, maybe we should....)
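
To make that scenario concrete, here is a rough sketch, assuming
psycopg2 and a hypothetical accounts table containing a row with
id = 1, of a READ COMMITTED transaction blocking on a row and then
quietly proceeding against the updated version:

import threading
import time
import psycopg2

DSN = "dbname=test"   # placeholder connection string

def second_session():
    conn2 = psycopg2.connect(DSN)
    with conn2, conn2.cursor() as cur:
        # Blocks until the first session commits, then re-evaluates
        # the WHERE clause against the updated row version and
        # applies the change; nothing is reported to the user.
        cur.execute("UPDATE accounts SET balance = balance - 10"
                    " WHERE id = 1")
    conn2.close()

conn1 = psycopg2.connect(DSN)
cur1 = conn1.cursor()
cur1.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 1")

worker = threading.Thread(target=second_session)
worker.start()
time.sleep(1)       # give the second session time to block on the row
conn1.commit()      # releasing the lock lets it finish silently
worker.join()
conn1.close()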
> This isn't the first time that we've seen advantages that could be
> had from packaging up a whole transaction so the database can see
> everything the transaction needs to do. Perhaps we should have an
> interface for saying you're going to feed a series of commands
> which you want the database to repeat for you verbatim
> automatically on serialization failures. Since you can't construct
> the queries based on the results of previous queries the database
> would be free to buffer them all up and run them together at the
> end of the transaction which would allow the other tricky
> optimizations we've pondered in the past as well.
How is that different from putting the logic into a function and
retrying on serialization failure?  Are you just proposing a more
convenient mechanism to do the same thing?
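For comparison, the function approach might look something like the
sketch below (again psycopg2, plus a made-up transfer() function and
accounts table).  The whole transaction becomes a single statement,
so the client can re-issue it verbatim after a serialization
failure, which is essentially what the proposed interface would
automate:

import psycopg2
import psycopg2.errorcodes

SETUP_SQL = """
CREATE OR REPLACE FUNCTION transfer(src int, dst int, amount numeric)
RETURNS void LANGUAGE plpgsql AS $$
BEGIN
    UPDATE accounts SET balance = balance - amount WHERE id = src;
    UPDATE accounts SET balance = balance + amount WHERE id = dst;
END;
$$;
"""

conn = psycopg2.connect("dbname=test")    # placeholder connection string
with conn, conn.cursor() as cur:
    cur.execute(SETUP_SQL)

while True:
    try:
        with conn, conn.cursor() as cur:
            # the whole transaction is this one statement
            cur.execute("SELECT transfer(%s, %s, %s)", (1, 2, 100))
        break
    except psycopg2.Error as e:
        if e.pgcode != psycopg2.errorcodes.SERIALIZATION_FAILURE:
            raise
        # re-issue the same statement verbatim, which is what the
        # buffered-command interface would do automatically

conn.close()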
-Kevin

