Re: true serializability and predicate locking - Mailing list pgsql-hackers

From: Greg Stark
Subject: Re: true serializability and predicate locking
Date:
Msg-id: 407d949e1001071317x5da30b37ya1872be5cce61f8c@mail.gmail.com
In response to: Re: true serializability and predicate locking ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Responses: Re: true serializability and predicate locking ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
List: pgsql-hackers
On Thu, Jan 7, 2010 at 8:43 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> No, it's an attempt to reflect the difference in costs for true
> serializable transactions, so that the optimizer can choose a plan
> appropriate for that mode, versus some other.  In serializable
> transaction isolation there is a higher cost per tuple read, both
> directly in locking and indirectly in increased rollbacks; so why
> lie to the optimizer about it and say it's the same?

This depends on how you represent the predicates. If you represent the
predicate by indicating that you might have read any record in the
table -- i.e. a full-table lock -- then the per-tuple read overhead
would be very low, effectively zero. The chance of a serialization
failure would go up, but I don't see how to represent that as a
planner cost.
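
To make the trade-off concrete, here is a minimal sketch in C -- purely
illustrative, not PostgreSQL's actual lock manager structures -- of the
two extremes: a relation-level entry that costs essentially nothing per
tuple read, versus a tuple-level entry recorded on every fetch.

/*
 * Hypothetical structures, for illustration only.  A predicate lock can
 * be recorded at relation granularity ("I may have read anything in this
 * table"), which adds no work per tuple read but conflicts with every
 * concurrent write to the table, or at tuple granularity, which costs a
 * lock entry on every fetch but conflicts only with writes that touch
 * the tuples actually read.
 */
typedef enum PredLockGranularity
{
    PREDLOCK_RELATION,          /* whole-table predicate lock */
    PREDLOCK_TUPLE              /* single-tuple predicate lock */
} PredLockGranularity;

typedef struct PredLock
{
    PredLockGranularity granularity;
    unsigned int        relation_id;    /* table covered by the lock */
    unsigned int        tuple_id;       /* only meaningful for PREDLOCK_TUPLE */
} PredLock;

/* Coarse representation: one entry for the whole scan, ~zero per-tuple cost. */
static PredLock
lock_whole_relation(unsigned int relation_id)
{
    PredLock    lock = {PREDLOCK_RELATION, relation_id, 0};

    return lock;
}

/* Fine representation: one entry per tuple read, paid on every fetch. */
static PredLock
lock_single_tuple(unsigned int relation_id, unsigned int tuple_id)
{
    PredLock    lock = {PREDLOCK_TUPLE, relation_id, tuple_id};

    return lock;
}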

But this isn't directly related to the plan in any case. You could do
a full table scan but record in the predicate lock that you were only
interested in records matching certain constraints. Or you could do an
index scan but still represent the predicate lock as a full-table
lock anyway.
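
Continuing the hypothetical PredLock sketch above, a conflict check
might look like the following; the point is that what a concurrent
writer collides with depends only on what the reader chose to record,
not on whether the reader used a sequential or an index scan.

/*
 * Conflict check against the hypothetical PredLock above: the decision
 * is driven entirely by the recorded representation, so a sequential
 * scan that recorded a narrow lock and an index scan that recorded a
 * relation-level lock are both handled the same way here.
 */
static int
predlock_conflicts_with_write(const PredLock *lock,
                              unsigned int relation_id,
                              unsigned int tuple_id)
{
    if (lock->relation_id != relation_id)
        return 0;               /* different table: never a conflict */

    if (lock->granularity == PREDLOCK_RELATION)
        return 1;               /* coarse lock: any write to the table conflicts */

    return lock->tuple_id == tuple_id;  /* fine lock: only a matching tuple conflicts */
}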



--
greg

