On Wed, Mar 03, 2004 at 17:22:44 -0600,
"Karl O. Pinc" <kop@meme.com> wrote:
>
> To make it fast, you'd want to keep the max(id2) value on the table
> keyed by id1. Your trigger would update the max(id2) value as well
> as alter the row being inserted. To keep from having problems with
> concurrent inserts, you'd need to perform all inserts inside
> serialized transactions. The only problem I see is that there's
> a note in the documentation that says that postgresql's serialization
> doesn't always work. Anybody know if it would work in this case?
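
If I'm following the scheme, the trigger side would look roughly like
the sketch below (the table and column names are just placeholders I
made up; the real definitions would obviously differ):

    -- Hypothetical tables: data(id1, id2, ...) plus a side table
    -- holding the current max(id2) for each id1.
    CREATE TABLE data_max (
        id1     integer PRIMARY KEY,
        max_id2 integer NOT NULL
    );

    CREATE OR REPLACE FUNCTION assign_id2() RETURNS trigger AS '
    DECLARE
        m integer;
    BEGIN
        -- Bump the stored max for this id1; create the counter row
        -- the first time this id1 shows up.
        UPDATE data_max SET max_id2 = max_id2 + 1 WHERE id1 = NEW.id1;
        IF NOT FOUND THEN
            INSERT INTO data_max (id1, max_id2) VALUES (NEW.id1, 1);
        END IF;
        -- Use the stored max as the new row''s id2.
        SELECT max_id2 INTO m FROM data_max WHERE id1 = NEW.id1;
        NEW.id2 := m;
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER data_assign_id2 BEFORE INSERT ON data
        FOR EACH ROW EXECUTE PROCEDURE assign_id2();

As far as I can tell, the UPDATE takes a row-level lock on the counter
row, so concurrent inserts for the same id1 queue up behind each other
even in read committed; the part that can still race is the very first
insert for a given id1, where two sessions may both take the INSERT
branch.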
There was a discussion about predicate locking some time ago (I think
last summer). Postgres doesn't do this and it is possible for two
parallel transactions to get results that aren't consistent with
one transaction occurring before the other. I think the particular
example was inserting some rows and then counting them in each of
two parallel transactions. The counts each transaction gets won't match
what you'd see if either of the two transactions had run entirely before
the other.
This might be what you are referring to.
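
As best I recall, the interleaving went something like this (table t is
just a placeholder and assumed to start out empty), with both sessions
at the serializable isolation level:

    -- Session A:
    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    INSERT INTO t VALUES (1);

    -- Session B:
    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    INSERT INTO t VALUES (2);

    -- Session A:
    SELECT count(*) FROM t;   -- sees only its own row
    COMMIT;

    -- Session B:
    SELECT count(*) FROM t;   -- still sees only its own row
    COMMIT;

Run serially in either order, the second transaction's count would have
to include the first one's row; run in parallel, each count sees only
its own insert, which no serial ordering could produce.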