Thread: Serializable implementation


From: "Kevin Grittner"
For perspective:  Since our current application framework is about
ten years old now, the Wisconsin State Courts System has put
together a "Long-Term Application Development Group" to review all
aspects of our development and production runtime environments.  We
started by reviewing various aspects of these environments and
addressing the question: "Ignoring for the moment how we would get
there, where would we most like to be four years from now?"
One of the elements on the list was our database environment.  We're
very happy with PostgreSQL except for one thing -- the lack of
support for serializable transaction isolation means that no
integrity rules can be relied upon except those implemented within
the database engine itself (such as foreign key constraints).  Given
the complexity of the schema, the number of programmers, and the
number of queries -- any attempt to address the problem by looking
at particular interactions between particular queries and using
explicit locking would be hard to distinguish from completely
useless.
Serializable transaction isolation is attractive for shops with
active development by many programmers against a complex schema
because it guarantees data integrity with very little staff time --
if a transaction can be shown to always do the right thing when it
is run alone (before or after any other transaction), it will always
do the right thing in any mix of concurrent serializable
transactions.  Period.
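For a concrete (if contrived) illustration of that guarantee, here is the classic "write skew" pattern, which snapshot isolation permits but true serializable isolation would reject; the doctors table and its invariant are invented for the example:

```sql
-- Invariant the application enforces in code: at least one doctor
-- must always be on call.  Each transaction preserves it when run
-- alone, yet under snapshot isolation this interleaving commits
-- both and leaves nobody on call.

-- Session A:
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM doctors WHERE on_call;  -- sees 2, so leaving is "safe"

-- Session B (concurrently):
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM doctors WHERE on_call;  -- also sees 2

-- Session A:
UPDATE doctors SET on_call = false WHERE name = 'alice';
COMMIT;

-- Session B:
UPDATE doctors SET on_call = false WHERE name = 'bob';
COMMIT;  -- no write-write conflict, so both commits succeed

-- Under true serializable isolation, one of these two transactions
-- would instead fail with a serialization error.
```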
I'm going to try to ignore any suggestions that no other database
product has implemented this or that it's not useful.  While it
carries a performance cost which makes it untenable in some
situations, and isn't needed when you have only a handful of
programmers developing against only 100 tables, it has its place
and is supported in every major database product I know *except* for
PostgreSQL.
We would like to relax our portability mandate with regard to the
database layer, and use more PostgreSQL-specific features, but are
reluctant to do so while PostgreSQL remains virtually alone in
lacking support for actual serializable transactions.  (Other MVCC
databases, like Oracle and recent versions of Microsoft SQL Server,
provide snapshot isolation but also support serializable
isolation.)  Given the benefits of using the PostgreSQL-specific
features, it appears that we might be ahead by implementing support
for serializable transaction isolation so that we can relax
portability requirements and thereby avoid developing, in our
framework, our own (portable) implementation of features available
in "native" PostgreSQL.
Between staff and contractors we have over 20 programmers working
here.  We would probably draw on that base for some of the work. 
Given the nature of the work, we might be able to find a CS grad
student somewhere who would be interested in contributing.  If there
were any contractors with PostgreSQL development experience
registered with the state's procurement agency[1], we would be very
likely to draw on them in the effort.
I would like to start new threads for any technical discussions --
what I want to discuss here is how to approach this in a way that
avoids the SE-PostgreSQL doldrums.  I know from some responses to
previous discussions of the issue, both on and off list, that there
are other PostgreSQL users who want this, so at a minimum we would
make our patch set available to others; but it would certainly be a
benefit to both us (in avoiding re-basing the patch for every major
release, as well as having extra eyes on the code), and to others
(in having it "just work" on installation, without applying a
third-party patch set) if we can do this right.
One thing which strikes me is that there are at least three fairly
well documented techniques for implementing true serializable
transaction isolation under MVCC:
(1)  Strict 2 Phase Locking (S2PL).  This is the most widely used,
by far, and probably best understood by developers.  It has the
disadvantage of creating so much blocking under some loads that it
isn't always feasible.
(2)  Optimistic Concurrency Control (OCC).  This generates almost no
overhead in some DBMSs under some read-mostly loads (although I'm
not sure whether that would be true in PostgreSQL), but degenerates
worse than S2PL under many loads with higher levels of write
contention.
(3)  Serializable Snapshot Isolation (SSI).  This relies mostly on
snapshot isolation techniques, with detection of possible conflicts
with low overhead.  It is a very new technique, appearing in the
literature less than two years ago, and only implemented so far in
two prototypes: Oracle Berkeley-DB and InnoDB.  Under many loads
performance is very close to Snapshot Isolation (SI), but the
dynamics aren't too well known yet, and there may be some lurking
pessimal cases not yet identified.  Serialization error rates are
higher than under SI or S2PL, so it should only be used where the
software is prepared to deal with serialization errors in a universal
and systematic fashion, rather than hand-coding recovery on a
query-by-query basis.  The more sophisticated identification of
dangerous lock structures among a set of concurrent transactions
means that the particular reason for any serialization failure may
be hard to explain to the uninitiated.
Given that each of these would be the best choice for some
transaction mixes, it might make sense to allow some mapping of the
four ANSI transaction isolation levels to strategies for
implementation.  At the risk of generating some backlash on this
particular point, it might allow a Read Committed implementation
which avoids some of the current anomalies, as a user-selectable
alternative to the current implementation.  As a hypothetical
example, one might map the ANSI Read Uncommitted mode to what
PostgreSQL currently calls Read Committed, Read Committed to a
get-a-new-snapshot strategy, Repeatable Read to SI, and Serializable
to SSI.  (Why do I feel like now would be a good time to duck?)
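Spelled out as SQL, that hypothetical mapping would look like this (to be clear, none of these mappings exist today; this merely restates the example above):

```sql
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- current "Read Committed" behavior
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;    -- get-a-new-snapshot strategy
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;   -- snapshot isolation (SI)
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;      -- serializable snapshot isolation (SSI)
```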
To give some idea of the scope of development, Michael Cahill added
SSI to InnoDB by modifying 250 lines of code and adding 450 lines of
code; however, InnoDB already had the S2PL option and the prototype
implementation isn't as sophisticated as I feel is necessary for real
production use (particularly regarding the granularity of SIREAD
locks).  I'm assuming it would take more to reach real production
quality in PostgreSQL.  My SWAG would be to multiply by two or three.
So, with all the above for perspective, assuming management approves
development of this for use in our shop, how would people recommend
that we proceed to maximize the chances that it eventually finds its
way into the PostgreSQL code base?  Keep in mind that we're not
panicking here -- we're looking at a potential two to four year time
line.
-Kevin
[1] http://vendornet.state.wi.us/vendornet/vguide/index.asp



Re: Serializable implementation

From: Jeff Davis
On Mon, 2009-12-28 at 11:54 -0600, Kevin Grittner wrote:
> Serializable transaction isolation is attractive for shops with
> active development by many programmers against a complex schema
> because it guarantees data integrity with very little staff time --

I would like to see true serializability in some form as well.

> Given that each of these would be the best choice for some
> transaction mixes, it might make sense to allow some mapping of the
> four ANSI transaction isolation levels to strategies for
> implementation.  At the risk of generating some backlash on this
> particular point, it might allow a Read Committed implementation
> which avoids some of the current anomalies, as a user-selectable
> alternative to the current implementation.  As a hypothetical
> example, one might map the ANSI Read Uncommitted mode to what
> PostgreSQL currently calls Read Committed, Read Committed to a
> get-a-new-snapshot strategy, Repeatable Read to SI, and Serializable
> to SSI.  (Why do I feel like now would be a good time to duck?)

I like the idea of moving toward using "serializable" for true
serializability, and "repeatable read" for snapshot isolation (perhaps
with a compatibility GUC existing for a while to get the old behavior).

However, I don't know what you mean by "get-a-new-snapshot strategy" or
how it is different from the current read committed behavior. We
obviously want to be careful changing the default isolation level's
behavior.

Regards,
Jeff Davis



Re: Serializable implementation

From: "Florian G. Pflug"
On 28.12.09 18:54, Kevin Grittner wrote:
> To give some idea of the scope of development, Michael Cahill added
> SSI to InnoDB by modifying 250 lines of code and adding 450 lines of
>  code; however, InnoDB already had the S2PL option and the prototype
>  implementation isn't as sophisticated as I feel is necessary for
> real production use (particularly regarding the granularity of SIREAD
> locks).  I'm assuming it would take more to reach real production
> quality in PostgreSQL.  My SWAG would be to multiply by two or
> three.

I believe the hard part of implementing true serializability is not the
actual SSI or S2PL algorithm, but rather the necessary predicate locking
strategy.

So I think checking how InnoDB tackles that, and how much of its code
is involved, might give a more realistic estimate of the effort required.

best regards,
Florian G. Pflug


Re: Serializable implementation

From: "Kevin Grittner"
Jeff Davis <pgsql@j-davis.com> wrote:
> I don't know what you mean by "get-a-new-snapshot strategy" or
> how it is different from the current read committed behavior.
Our current Read Committed level, if it blocks on a competing UPDATE
or DELETE, can provide a view of data which is based on a mix of
snapshots, causing various anomalies.  (See the recent thread titled
"determine snapshot after obtaining locks for  first statement" that
veered off into a discussion of this issue.)
Basically, if an UPDATE or DELETE blocks on a competing UPDATE or
DELETE (within a READ COMMITTED database transaction), it will, once
it is free to proceed, find the modified version of the row on which
it was blocking.  So that will be committed data as of a later
transaction ID than other data we may be viewing, which would be
based on the snapshot from the start of the statement.  As described
in the thread, that causes a target row not to be visible unless it
is present in both snapshots, and may cause joined tables or
subquery data to be out of sync with data from target rows.
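A minimal sketch of the anomaly, using a hypothetical accounts table:

```sql
-- Session A:
BEGIN;  -- READ COMMITTED (the default)
UPDATE accounts SET balance = 0 WHERE id = 1;  -- row had balance = 100

-- Session B (concurrently):
BEGIN;  -- READ COMMITTED
DELETE FROM accounts WHERE id = 1 AND balance > 0;  -- blocks on A's row lock

-- Session A:
COMMIT;

-- Session B now proceeds against the *new* version of the row
-- (balance = 0), so the row no longer satisfies the WHERE clause and
-- nothing is deleted, even though the row matched the snapshot B's
-- statement started with.  The statement has effectively seen a mix
-- of two snapshots.
```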
To avoid these anomalies, PostgreSQL would need to roll back the
affected statement, get a new snapshot, and start over.  This would
apparently be rather messy to implement, and would probably be a bit
slower.  I, for one, would willingly pay that price for better data
integrity -- if it affected me.  Since we do all data modifications
in serializable transactions (or as close as we can come to that),
it doesn't actually affect me directly.  I just thought that others
might be interested.  Well, that and the possibility that if this
*was* implemented, it might allow for a nice optimization to reduce
serialization conflicts in true serializable mode.   :-)
> We obviously want to be careful changing the default isolation
> level's behavior.
Of course.  I've been sort of assuming that we would default to
maintaining current behavior, although that may be overly
conservative.  I certainly would not want to eliminate either of the
existing isolation mode implementations, as they are likely to be
useful performance/integrity trade-offs for a significant percentage
of the user base.
-Kevin


Re: Serializable implementation

From: Greg Stark
On Mon, Dec 28, 2009 at 11:20 PM, Florian G. Pflug <fgp@phlo.org> wrote:
> I believe the hard part of implementing true serializability is not the
> actual SSI or S2PL algorithm, but rather the necessary predicate locking
> strategy.
>
> So I think checking how InnoDB tackles that, and how much of its code
> is involved, might give a more realistic estimate of the effort required.

We've been over this a few times.  The reason MySQL and Sybase/MSSQL
can tackle this is that they have very few access paths, which are
closely tied to their index structures.  Postgres supports many
different access paths, and indexes are plugin-replaceable modules with
a limited interface which can't specify or make detailed assumptions
about the pattern in which data is accessed.

If you read a record in a query then where would you place the
annotation to prevent me from performing a HOT update against that
record? How would you stop me from deleting that same record using an
access plan that doesn't use the same index as you used to find it? If
you scan a partial index where would you put the lock to prevent me
from updating one of those records to no longer fit the partial index
where clause? What if you do a reverse index scan instead of a forward
scan? If you scan the table for the one or two records matching
several indexes using bitmap index scans where would you put the
locks? In GIN, GIST, and hash indexes are you sure there's any
reasonable place to put the locks, especially given GIN's buffered
updates?

This is the hard part.  Talking about the details of the conflict
resolution algorithms and the benefits of serializable mode to your
development methodology is all fantasy as long as you don't have an
approach for actually detecting the conflicts in the first place
without rewriting large parts of the system.

-- 
greg


Re: Serializable implementation

From: "Kevin Grittner"
Greg Stark <gsstark@mit.edu> wrote:
> Talking about the details of the conflict resolution algorithms
> and the benefits of serializable mode to your development
> methodology is all fantasy as long as you don't have an approach
> for actually detecting the conflicts in the first place without
> rewriting large parts of the system.
Agreed.  I was hoping to keep the discussion on *this* thread
entirely off of technical issues like that.  I'll start specific
threads on specific technical issues as needed.  This will all turn
into a horribly opaque jumble if we mix it all up on one thread.
-Kevin


Re: Serializable implementation

From: Bruce Momjian
Kevin Grittner wrote:
> Jeff Davis <pgsql@j-davis.com> wrote:
>  
> > I don't know what you mean by "get-a-new-snapshot strategy" or
> > how it is different from the current read committed behavior.
>  
> Our current Read Committed level, if it blocks on a competing UPDATE
> or DELETE, can provide a view of data which is based on a mix of
> snapshots, causing various anomalies.  (See the recent thread titled
> "determine snapshot after obtaining locks for  first statement" that
> veered off into a discussion of this issue.)
>  
> Basically, if an UPDATE or DELETE blocks on a competing UPDATE or
> DELETE (within a READ COMMITTED database transaction), it will, once
> it is free to proceed, find the modified version of the row on which
> it was blocking.  So that will be committed data as of a later
> transaction ID than other data we may be viewing, which would be
> based on the snapshot from the start of the statement.  As described
> in the thread, that causes a target row not to be visible unless it
> is present in both snapshots, and may cause joined tables or
> subquery data to be out of sync with data from target rows.

FYI, this behavior is documented:
http://www.postgresql.org/docs/8.4/static/transaction-iso.html#XACT-READ-COMMITTED

I know Kevin knows this but others might find the docs help clarify the
issue.

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com
  + If your life is a hard drive, Christ can be your backup. +


Re: Serializable implementation

From: "Kevin Grittner"
Florian G. Pflug <fgp@phlo.org> wrote:
> I believe the hard part of implementing true serializability is
> not the actual SSI or S2PL algorithm, but rather the necessary
> predicate locking strategy.
>
> So I think checking how InnoDB tackles that, and how much of its
> code is involved, might give a more realistic estimate of the
> effort required.
Apologies, Florian.  I never got your email, and somehow got the
impression from Greg's reply that you had led the discussion into
overly technical areas for this thread.  I dug up your email in the
archives and found that the above was the entirety of your comment,
so I'll reply to that now, rather than my misapprehension.  :-/
I agree that most of the work will be in the predicate locking area.
You're probably right that if I could isolate the lines of code
required for that in InnoDB it might get me to a more accurate
estimate of lines of code.  And as Greg points out, it is bound to
be a bit more complex in PostgreSQL than in InnoDB.
At this point, though, I'm just looking for an "order of magnitude"
sort of estimate of complexity, since even a totally accurate line
count wouldn't do more than hint at the really important metrics --
how much money it will take, and how much risk of destabilizing
existing code there is.  A certain amount of work is required to get
an accurate handle on those.  That work may start soon, but we're
not there yet.
Thanks for the input.
-Kevin


Re: Serializable implementation

From: "Kevin Grittner"
Jeff Davis <pgsql@j-davis.com> wrote:
> On Mon, 2009-12-28 at 11:54 -0600, Kevin Grittner wrote:
>> Serializable transaction isolation is attractive for shops with
>> active development by many programmers against a complex schema
>> because it guarantees data integrity with very little staff time
> 
> I would like to see true serializability in some form as well.
Would you be able to help with the development in any way?
I just had a chat with our Deputy CIO, and he said that in this next
phase, where we decide what to prioritize and what resources to
allocate:
(1)  He said that they would be looking to apply programming
resources to this rather than cash.  Any new contracting would be a
hard sell under the current budget constraints.  (Nobody's getting a
raise for the next couple years and we're all having to take unpaid
days off; apparently adding in hours from new contractors would be
viewed as subverting that effort....)  :-(
I had been assuming that contracting out for the expertise was an
option, as we've done that before; but apparently the times have
changed -- at least for now.  Apologies.
(2)  He said that a factor in authorizing even the programming time
would be whether others in the PostgreSQL community cared enough
about the issue to help with development.
We have three meetings with the CIO scheduled in January to
prioritize and assign resources to various issues identified as
beneficial over the next four year period.  Any indications of
support for this issue from the community will bolster the chances
that we invest significant programming resources in it.
-Kevin


Re: Serializable implementation

From: Jeff Davis
On Tue, 2009-12-29 at 11:50 -0600, Kevin Grittner wrote:
> Would you be able to help with the development in any way?

Yes, I'd be willing to invest some time.

For an area of active research such as this, I obviously can't promise
much in the way of results. But I can help explore and implement
designs.

Regards,
Jeff Davis



Re: Serializable implementation

From: Bruce Momjian
Jeff Davis wrote:
> On Tue, 2009-12-29 at 11:50 -0600, Kevin Grittner wrote:
> > Would you be able to help with the development in any way?
> 
> Yes, I'd be willing to invest some time.
> 
> For an area of active research such as this, I obviously can't promise
> much in the way of results. But I can help explore and implement
> designs.

I certainly am interested in helping with the design, as others have as
well in the past.

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com
  + If your life is a hard drive, Christ can be your backup. +


Re: Serializable implementation

From: Jeff Davis
On Mon, 2009-12-28 at 11:54 -0600, Kevin Grittner wrote:
> Given that each of these would be the best choice for some
> transaction mixes, it might make sense to allow some mapping of the
> four ANSI transaction isolation levels to strategies for
> implementation.  At the risk of generating some backlash on this
> particular point, it might allow a Read Committed implementation
> which avoids some of the current anomalies, as a user-selectable
> alternative to the current implementation.  As a hypothetical
> example, one might map the ANSI Read Uncommitted mode to what
> PostgreSQL currently calls Read Committed, Read Committed to a
> get-a-new-snapshot strategy, Repeatable Read to SI, and Serializable
> to SSI.  (Why do I feel like now would be a good time to duck?)

I think we should document that REPEATABLE READ is SI, and encourage
people to use it rather than SERIALIZABLE if they want SI. Also,
document that SERIALIZABLE may be changed in the future to be truly
serializable, and that may have a performance penalty and cause
serialization errors more frequently.

Documenting it in 8.5 gives us a little more freedom to change the
behavior of SERIALIZABLE over the next few releases.

I also don't think we should plan to change READ COMMITTED mode at all,
even with a GUC. The only mode we should change is SERIALIZABLE. The
reason is that READ COMMITTED is now the default, and I don't think it's
realistic that we'll ever change its semantics (I suppose we could
change the default isolation mode to READ UNCOMMITTED, but I'm not sure
that's a good idea either). A compatibility GUC is one thing, but a
behavior-changing GUC that lasts forever -- and more importantly, would
default to the _old_ behavior forever -- is highly undesirable (to me at
least).

It seems much more reasonable to have a
"serializable_is_repeatable_read" GUC that defaults to true for a
release, then defaults to false, then we may even eventually eliminate
it.

Regards,
Jeff Davis



Re: Serializable implementation

From: "Kevin Grittner"
Jeff Davis <pgsql@j-davis.com> wrote:
> I think we should document that REPEATABLE READ is SI, and
> encourage people to use it rather than SERIALIZABLE if they want
> SI. Also, document that SERIALIZABLE may be changed in the future
> to be truly serializable, and that may have a performance penalty
> and cause serialization errors more frequently.
> 
> Documenting it in 8.5 gives us a little more freedom to change the
> behavior of SERIALIZABLE over the next few releases.
Could you draft a proposed doc change?  While my ideas have
sometimes influenced the docs, my words don't tend to make it, so
I'm probably not the best candidate to suggest something.  (That's
not actually a shocker for me, since I'm a visual thinker, and
getting ideas into words is a bit slow and clumsy for me.)
> I also don't think we should plan to change READ COMMITTED mode at
> all, even with a GUC. The only mode we should change is
> SERIALIZABLE.
OK.
> It seems much more reasonable to have a
> "serializable_is_repeatable_read" GUC
I'm torn between thinking it would be good to spell it that way and
thinking that we should have "serializable_isolation_implementation"
GUC (or something to that effect) which maps to an enumeration
containing "snapshot" and "ssi".  Opinions welcome, since I've put
that GUC at the top of my implementation list.  :-)
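For illustration, the two spellings under discussion might look like this in postgresql.conf (both parameter names are hypothetical at this point):

```
# Jeff's boolean spelling:
serializable_is_repeatable_read = off

# The enumeration spelling:
serializable_isolation_implementation = 'ssi'   # or 'snapshot'
```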
> that defaults to true for a release, then defaults to false, then
> we may even eventually eliminate it.
If we don't default it to the full serializable mode in the initial
release where that is present, I doubt we ever will.  Since most
people seem to use the default READ COMMITTED, and those who are
currently asking for SERIALIZABLE might indeed prefer to get that
(just as a guess with nothing to back it up), I'd be inclined to
argue for changing the behavior of SERIALIZABLE in the first release
where we have true serializable transactions implemented.  As for
eventually eliminating it -- who knows?  I think we should leave
that to posterity.  It really depends on application code we might
break, which is hard to determine.
-Kevin


Re: Serializable implementation

From: Jeff Davis
On Thu, 2010-01-07 at 16:27 -0600, Kevin Grittner wrote:
> Could you draft a proposed doc change?  While my ideas have
> sometimes influenced the docs, my words don't tend to make it, so
> I'm probably not the best candidate to suggest something.  (That's
> not actually a shocker for me, since I'm a visual thinker, and
> getting ideas into words is a bit slow and clumsy for me.)

Sure. I wonder how many doc-only patches are going to be in the Jan
commitfest?  ;)

> I'm torn between thinking it would be good to spell it that way and
> thinking that we should have "serializable_isolation_implementation"
> GUC (or something to that effect) which maps to an enumeration
> containing "snapshot" and "ssi".  Opinions welcome, since I've put
> that GUC at the top of my implementation list.  :-)

If there are different semantics, we shouldn't just call it an
implementation detail. Particularly when the old behavior violates the
standard (at least the newest version, I think).

> I'd be inclined to
> argue for changing the behavior of SERIALIZABLE in the first release
> where we have true serializable transactions implemented.

Ok, I don't have a strong opinion about that.

> It really depends on application code we might
> break, which is hard to determine.

Well, hopefully it doesn't break anything. Applications asking for
SERIALIZABLE should already be expecting serialization errors. Did you
have something else in mind?

Regards,
Jeff Davis




Re: Serializable implementation

From: "Kevin Grittner"
Jeff Davis <pgsql@j-davis.com> wrote:
>> I'm torn between thinking it would be good to spell it that way
>> and thinking that we should have
>> "serializable_isolation_implementation" GUC (or something to that
>> effect) which maps to an enumeration containing "snapshot" and
>> "ssi".  Opinions welcome, since I've put that GUC at the top of
>> my implementation list.  :-)
> 
> If there are different semantics, we shouldn't just call it an
> implementation detail. Particularly when the old behavior violates
> the standard (at least the newest version, I think).
That is a point in favor of your spelling, for sure.  I guess I was
still trying to leave the door at least ajar for a parallel S2PL
implementation, but perhaps that belongs in the "we'll cross that
bridge when we come to it" category.  (Just so we don't burn that
bridge before we get there....)
>> It really depends on application code we might
>> break, which is hard to determine.
> 
> Well, hopefully it doesn't break anything. Applications asking for
> SERIALIZABLE should already be expecting serialization errors. Did
> you have something else in mind?
It seems that some users like to assume that most queries can't get
serialization errors and try to identify those which can and handle
the errors specifically in those cases.  In my experience that is
not wise, especially if you are trying for portable code, but it
seems to have worked reasonably well for some users historically, so
we would need a very good reason to break it for them.  I'd suggest
we plan on leaving the option indefinitely, until such time as
someone feels compelled for some reason to address the issue. 
Granted, the fix is pretty simple, but if they have their SET
TRANSACTION ISOLATION LEVEL statements scattered around their code,
"simple" wouldn't necessarily mean "quick and easy."
-Kevin


Re: Serializable implementation

From: Robert Haas
>> I'm torn between thinking it would be good to spell it that way and
>> thinking that we should have "serializable_isolation_implementation"
>> GUC (or something to that effect) which maps to an enumeration
>> containing "snapshot" and "ssi".  Opinions welcome, since I've put
>> that GUC at the top of my implementation list.  :-)

Hmm.  Why would we use a GUC for this instead of an additional option
to BEGIN TRANSACTION?

I would think:

BEGIN TRANSACTION ISOLATION LEVEL {READ COMMITTED | SNAPSHOT | SERIALIZABLE}

With our current levels being the first two of those.

Or is that a bad idea?

...Robert


Re: Serializable implementation

From: Jeff Davis
On Thu, 2010-01-07 at 21:02 -0500, Robert Haas wrote:
> Hmm.  Why would we use a GUC for this instead of an additional option
> to BEGIN TRANSACTION?

I'm with you. I feel pretty strongly that we should not have
behavior-changing GUCs.

I make an exception for compatibility GUCs where the eventual default
will be the new behavior, the vast majority of users will be using the
new behavior in a release or two, and we just have the GUC to allow
backwards compatibility for a while (and would be willing to potentially
eliminate the GUC in a few releases).

The last thing we want is 103 behavior-changing GUCs that vary widely
in their settings among users.

> BEGIN TRANSACTION ISOLATION LEVEL {READ COMMITTED | SNAPSHOT | SERIALIZABLE}
> 
> With our current levels being the first two of those.
> 
> Or is that a bad idea?

We already have REPEATABLE READ, which has the same semantics as
SNAPSHOT would have, so I don't think we need to introduce a new level.

I think the only thing we need to do is suggest that people start using
REPEATABLE READ if they want snapshot isolation. That will give us more
freedom to change SERIALIZABLE to be true serializability in 8.6.

If we need a GUC that aliases SERIALIZABLE to REPEATABLE READ for
backwards compatibility (after we change SERIALIZABLE) for a short
period of time, that sounds reasonable. Even that may not be necessary
though because it shouldn't really break any promises that we made in
the documentation (that I'm aware of).

Regards,
Jeff Davis



Re: Serializable implementation

From: "Kevin Grittner"
Jeff Davis <pgsql@j-davis.com> wrote:
> On Thu, 2010-01-07 at 21:02 -0500, Robert Haas wrote:
>> Hmm.  Why would we use a GUC for this instead of an additional
>> option to BEGIN TRANSACTION?
> 
> I'm with you. I feel pretty strongly that we should not have
> behavior-changing GUCs.
OK.  I actually thought this might be useful for testing, since we
could run a test with SI and SSI without changing any code in the
test -- change the conf file, reload, and run again.  I guess,
though, that it shouldn't be too bad to find some other way to
control that, and adding a GUC for development that we rip out at
the end would be a bit silly.
I guess the question is whether we need the GUC to support people
who are asking for serializable transaction isolation but wouldn't
be prepared to deal with an increase in serialization failures from
actually *getting* serializable isolation.
>>  BEGIN TRANSACTION ISOLATION LEVEL {READ COMMITTED | SNAPSHOT |
>>  SERIALIZABLE}
>> 
>> With our current levels being the first two of those.
>> 
>> Or is that a bad idea?
> 
> We already have REPEATABLE READ, which has the same semantics as
> SNAPSHOT would have, so I don't think we need to introduce a new
> level.
Agreed.  Snapshot should clearly continue to map to REPEATABLE READ,
so I see no reason to invent a nonstandard alias for it.
> I think the only thing we need to do is suggest that people start
> using REPEATABLE READ if they want snapshot isolation. That will
> give us more freedom to change SERIALIZABLE to be true
> serializability in 8.6.
Now you're being much more optimistic than I.  I think that a year
from now we'll be working to get performance to an acceptable range,
and at the point where I feel it's usable for us there will still be
months of work to address concerns which will be raised only when it
seems that an actual commit might be imminent.  (I mean, does that
ever *not* happen on a big patch?)  That puts this probably in the
third major release after 8.4, whatever number that gets.
Other than the optimistic release number, however, I think your
point is on target.  It could at least help justify making fully
serializable transactions the default for the GUC, and maybe justify
not even having the GUC.
> If we need a GUC that aliases SERIALIZABLE to REPEATABLE READ for
> backwards compatibility (after we change SERIALIZABLE) for a short
> period of time, that sounds reasonable. Even that may not be
> necessary though because it shouldn't really break any promises
> that we made in the documentation (that I'm aware of).
I agree that it would break no promises, but it could break existing
applications.  It would only break applications which depended on
certain assumptions: namely, that the serializable level would
specifically be snapshot isolation, that based on that it would be
possible to analyze all combinations of concurrent transactions and
accurately predict which ones might roll back with serialization
errors, and that it is OK to code specifically for such errors only on
the vulnerable transactions.
The only evidence I've seen that such applications exist is the
pushback from some on this list regarding the unpredictability of
which transactions could roll back on serialization failures -- that
it could happen on transactions for which the programmer didn't
handle it.  I've assumed that this objection was based on some
knowledge that such coding was in production somewhere.  If it is,
we either need the GUC or we rationalize its absence on the basis of
having told people for two years to use REPEATABLE READ if they want
bare snapshot behavior.
Opinions?
-Kevin


Re: Serializable implementation

From: Robert Haas
On Fri, Jan 8, 2010 at 9:46 AM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> Opinions?

I think anything you decide about how to invoke the different
isolation levels will be easy to change later to meet whatever the
consensus of the community is at that time.  I wouldn't spend any time
or energy on it now.  For purposes of your prototype patch, using
REPEATABLE READ for the current serializable and SERIALIZABLE for the
new behavior will be plenty good enough.

...Robert


Re: Serializable implementation

From: "Kevin Grittner"
Robert Haas <robertmhaas@gmail.com> wrote:

> I think anything you decide about how to invoke the different
> isolation levels will be easy to change later to meet whatever the
> consensus of the community is at that time.  I wouldn't spend any
> time or energy on it now.  For purposes of your prototype patch,
> using REPEATABLE READ for the current serializable and
> SERIALIZABLE for the new behavior will be plenty good enough.

I think the attached (so far no-op) patch covers that.  This is "for
the record" for those interested; I'm not looking for review per se,
or commit.

-Kevin

P.S.  The git hints worked like a charm, and saved much flailing.

Attachment