2010/1/7 Albe Laurenz <laurenz.albe@wien.gv.at>:
> I don't know if such a thing would be easy to implement in
> PostgreSQL, but I had thought that the "standard" approach to
> implement predicate locking is like this:
>
> Whenever you touch (read) a data structure, you tag it with a lock
> that prevents anybody else from modifying the data structure in a way
> that would change your result if you perform the same operation again
> (or with SIREAD locks, it will not prevent the modification, but
> may lead to aborted transactions later).
>
> So no matter how complex the index condition is, it will boil down
> to accessing and hence locking certain parts of a table or index
> (in the case of a B*-tree, you'll probably end up locking gaps in
> the index).
That would be an "access-layer-based" version of predicate locking (a
typical implementation of which is the already-mentioned "next-key
locking").
The most "pure" version of predicate locking literally "locks
predicates" (i.e., conditions on rows). Conflicts are detected by
checking for "overlap" between predicates: is there a (possibly
non-existent) row that matches the same predicate? If so, and the
locks are incompatible, then a conflict arises (this would be
different in the SIREAD case, of course; I am talking about
traditional 2PL + predicate locking).
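In toy form (everything here -- the lock modes, the brute-force domain,
the lambda predicates -- is invented for illustration), conflict
detection amounts to a satisfiability question: could some row,
existing or not, satisfy both predicates while the lock modes are
incompatible?

# Toy illustration over a hypothetical single-column integer domain.
def predicates_overlap(p1, p2, domain=range(-1000, 1001)):
    """True if some candidate row value satisfies both predicates."""
    return any(p1(x) and p2(x) for x in domain)   # brute force: demo only

def conflicts(lock1, lock2):
    """A conflict needs overlapping predicates AND incompatible modes."""
    pred1, mode1 = lock1
    pred2, mode2 = lock2
    incompatible = not (mode1 == "read" and mode2 == "read")
    return incompatible and predicates_overlap(pred1, pred2)

# T1 read-locked "a BETWEEN 10 AND 20"; T2 wants to insert a row with a = 15.
t1_lock = (lambda a: 10 <= a <= 20, "read")
t2_lock = (lambda a: a == 15, "write")
print(conflicts(t1_lock, t2_lock))  # True: the phantom row a = 15 matches both

The brute force over a tiny domain is of course only there to make the
point; deciding overlap for arbitrary conditions is exactly the hard
part discussed next.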
In such a pure implementation of predicate locking, the overlap test
is done using the algebraic properties of the conditions, which is of
course extremely difficult (if not impossible) to implement perfectly
in a system that allows conditions of arbitrary complexity. Therefore,
the conditions are typically simplified (weakened) so that they match
more rows than the original predicate, but are easier to check for
overlap.
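As a rough sketch of that simplification (again with made-up names, and
assuming every condition gets weakened to one interval per column): the
weakened predicate matches at least every row the original matched, so
the cheap interval-intersection test may report spurious conflicts but
never misses a real one.

from math import inf

def intervals_overlap(p1, p2):
    """p1, p2: {column: (lo, hi)} -- the weakened predicates.
    They overlap unless some column's intervals are disjoint."""
    for col in set(p1) | set(p2):
        lo1, hi1 = p1.get(col, (-inf, inf))   # unconstrained column
        lo2, hi2 = p2.get(col, (-inf, inf))
        if max(lo1, lo2) > min(hi1, hi2):
            return False   # no row can satisfy both weakened predicates
    return True

# "WHERE a BETWEEN 10 AND 20 AND some_complex_expr(b)" weakened to the a-range:
p1 = {"a": (10, 20)}                  # complex part on b widened away
p2 = {"a": (15, 15), "b": (5, 5)}     # "WHERE a = 15 AND b = 5"
print(intervals_overlap(p1, p2))      # True -> conservatively flag a conflict

The occasional false positive is the price paid for keeping the overlap
test cheap.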
Nicolas