Jim Nasby <jimn@enterprisedb.com> writes:
> On Sep 14, 2006, at 9:16 PM, Golden Liu wrote:
>> I try to solve this problem this way:
>> First, update the table t but DON'T update the index.
>> Next, find all the tuples updated by this command and insert them into
>> the unique index.
> I suspect that your change adds a non-trivial overhead, which means
> we don't want it to be the normal case.
There's a bigger problem:
begin;
update tab set col1 = ... where unique_key = ...;
update tab set col2 = ... where unique_key = ...;
commit;
If the first update doesn't insert index entries into unique_key's
index, then the second update won't find the tuples it needs to update
(unless we hack the planner to not trust the index as valid ... and
then it'd fall back on a seqscan, which is hardly acceptable anyway).
The scheme that I've thought about involves inserting index entries as
usual, but instead of having the aminsert code error out immediately
upon finding a duplicate, have it make an entry in a list of things
that need to be rechecked before commit. This wins as long as potential
conflicts are uncommon. Performance could suck if the list gets too
large --- but we have more or less the same hazard now for foreign-key
checks, and it mostly works well enough. (In fact, maybe the existing
deferred trigger event list is the thing to use for the deferred
conflict rechecks.)
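
Very roughly, I'm imagining something like this (all the names here are
made up for illustration, not anything in the tree, and the real thing
might well piggyback on the deferred trigger queue instead of keeping
its own list):

#include "postgres.h"
#include "nodes/pg_list.h"
#include "storage/itemptr.h"

/* One queued recheck: which unique index, and which heap tuple. */
typedef struct PendingUniqueCheck
{
	Oid			indexoid;
	ItemPointerData tid;
} PendingUniqueCheck;

static List *pendingUniqueChecks = NIL;

/* Placeholder for the actual index re-scan logic. */
static bool still_conflicts(Oid indexoid, ItemPointer tid);

/*
 * Called from aminsert: instead of erroring out immediately on a
 * possible duplicate key, remember it for a pre-commit recheck.
 */
static void
queue_unique_recheck(Oid indexoid, ItemPointer tid)
{
	PendingUniqueCheck *chk = palloc(sizeof(PendingUniqueCheck));

	chk->indexoid = indexoid;
	chk->tid = *tid;
	pendingUniqueChecks = lappend(pendingUniqueChecks, chk);
}

/*
 * Called just before commit: re-test each queued entry.  By now the
 * conflicting tuple may have been deleted by a later command in the
 * same transaction, in which case there's nothing to complain about.
 */
static void
AtCommit_UniqueRechecks(void)
{
	ListCell   *lc;

	foreach(lc, pendingUniqueChecks)
	{
		PendingUniqueCheck *chk = (PendingUniqueCheck *) lfirst(lc);

		if (still_conflicts(chk->indexoid, &chk->tid))
			ereport(ERROR,
					(errcode(ERRCODE_UNIQUE_VIOLATION),
					 errmsg("duplicate key violates unique constraint")));
	}
	pendingUniqueChecks = NIL;
}

The point being that the fast path (no apparent conflict) pays nothing
beyond what aminsert already does today; only transactions that actually
see a potential duplicate take on the recheck bookkeeping.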
regards, tom lane