Stuart Rison <rison@biochemistry.ucl.ac.uk> writes:
> Now I've sussed this out: the UPDATE must be dealing with rows where b=4
> first, trying to UPDATE them to b=3 and failing because b=3 is already
> 'occupied'.
> Is this a bug? Is this a feature? Is there a way of telling the UPDATE
> what order to UPDATE in?
It's a bug, but I doubt it will get fixed any time soon, since
unique indexes would get a LOT slower if they did this "right". As you
say, the update must be hitting some of the b=4 rows first. At that
point, the system has no idea that the existing b=3 rows are going to be
invalidated in the same transaction, so its uniqueness check fails. To
do this right, the uniqueness check would have to be applied at the end
of the transaction, which would mean remembering every tuple that's been
inserted/updated in the current transaction and going back to check its
uniqueness.
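
To illustrate, here's a minimal sketch of the effect (the table, data, and
update here are my guesses at Stuart's setup, not his actual schema):

    CREATE TABLE t (a integer, b integer);
    CREATE UNIQUE INDEX t_b_key ON t (b);
    INSERT INTO t VALUES (1, 1);
    INSERT INTO t VALUES (2, 2);
    INSERT INTO t VALUES (3, 3);
    INSERT INTO t VALUES (4, 4);
    -- After the whole statement, every b would still be unique,
    -- but the check runs as each individual row is updated:
    UPDATE t SET b = b - 1;
    -- may fail with a duplicate-key error: if the b=4 row happens to be
    -- visited before the b=3 row has been changed to b=2, the new b=3
    -- collides with the old one
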
There is a running discussion on the hackers list about implementing
referential integrity constraints, which have this same problem that
you can't really apply (some kinds of) checks until you see the entire
transaction's results. It'll get done, but it's not simple. I'm not
sure whether anyone is thinking of applying that mechanism to unique-
index checks...
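
To make the idea concrete, the SQL standard's deferrable-constraint syntax
expresses exactly that end-of-transaction checking; a rough sketch, assuming
the eventual implementation follows the standard's syntax:

    CREATE TABLE parent (id integer PRIMARY KEY);
    CREATE TABLE child (
        parent_id integer REFERENCES parent (id)
            DEFERRABLE INITIALLY DEFERRED
    );
    BEGIN;
    -- The child row arrives before its parent, which would fail an
    -- immediate check, but the constraint is deferred...
    INSERT INTO child VALUES (1);
    INSERT INTO parent VALUES (1);
    COMMIT;   -- ...and is only checked here, so the transaction succeeds
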
regards, tom lane