Mats Lofkvist <mal@algonet.se> writes:
> tgl@sss.pgh.pa.us (Tom Lane) writes:
>> 7.3 will be smarter about this.
> Seems like I get the same behaviour with 7.3 beta1, updating
> the same row ~20k times and then 1k times more with profiling
> enabled (and with no vacuum in between) gives:
>                  2.72  166.12       1002/1002         _bt_doinsert [17]
> [18]    53.7     2.72  166.12       1002              _bt_check_unique [18]
>                 15.81  149.01   21721926/21721926     _bt_isequal [19]
>                  0.05    1.00     221414/412979       _bt_getbuf [40]
>                  0.01    0.21     221414/409772       _bt_relbuf [91]
>                  0.01    0.02       2709/6241         heap_fetch [187]
Yeah, but you'll notice there is no heap_fetch for most of 'em, unlike
before...
The loop in _bt_check_unique tests _bt_isequal before checking
ItemIdDeleted, so the short-circuit for deleted items doesn't serve to
reduce the number of key comparisons, only the number of heap tuple
fetches. I do not think that reversing this logic would be a net
improvement in typical cases: we want to fall out of the loop as soon as
we've run off the set of equal keys, whether the current index entry is
dead or alive. If we switched the test order then we'd not get out of
the loop until we found a live entry that's bigger than the insertion
key.
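To make that concrete, here's a toy standalone C sketch (not the actual
nbtinsert.c code; ToyIndexEntry, scan_key_first and scan_dead_first are
made-up names) that just counts how many index entries each test ordering
has to look at before it can stop, given a run of dead duplicates followed
by mostly-dead entries with larger keys:

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for an index page entry; not a PostgreSQL structure. */
typedef struct
{
    int     key;
    bool    is_dead;        /* item marked deleted (LP_DELETE-style flag) */
} ToyIndexEntry;

/* Current order: compare the key first, then skip dead entries. */
static int
scan_key_first(const ToyIndexEntry *entries, int n, int insert_key)
{
    int     scanned = 0;

    for (int i = 0; i < n; i++)
    {
        scanned++;
        if (entries[i].key != insert_key)
            break;          /* past the equal-key range: stop, dead or alive */
        if (entries[i].is_dead)
            continue;       /* dead entry: no heap fetch needed */
        /* live duplicate candidate: this is where heap_fetch would happen */
    }
    return scanned;
}

/* Hypothetical reversed order: skip dead entries before comparing keys. */
static int
scan_dead_first(const ToyIndexEntry *entries, int n, int insert_key)
{
    int     scanned = 0;

    for (int i = 0; i < n; i++)
    {
        scanned++;
        if (entries[i].is_dead)
            continue;       /* skipped before comparing, so no chance to stop */
        if (entries[i].key != insert_key)
            break;          /* only a live entry with a bigger key ends the scan */
        /* live duplicate candidate: this is where heap_fetch would happen */
    }
    return scanned;
}

int
main(void)
{
    /* dead versions of key 42, then mostly-dead entries with larger keys */
    ToyIndexEntry entries[] = {
        {42, true}, {42, true}, {42, false},
        {43, true}, {44, true}, {45, false},
    };
    int     n = (int) (sizeof(entries) / sizeof(entries[0]));

    printf("key-first order scans  %d entries\n", scan_key_first(entries, n, 42));
    printf("dead-first order scans %d entries\n", scan_dead_first(entries, n, 42));
    return 0;
}

The reversed order does save a few key comparisons on dead entries, but it
can't stop until it reaches a live entry past the equal-key range, so when
dead entries dominate it walks arbitrarily far to the right.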
> (In my case, I think the call to _bt_check_unique could be
> avoided altogether since the update isn't changing any of
> the columns present in the unique key.
It's fairly difficult for the index AM to know that; in general we don't
have access to the old tuple to check at the time the index update is
running.
regards, tom lane