David Gould wrote:
>
> >
> > > Vacuum deletes index tuples before deleting heap ones...
> >
> > Right, but until you've done a vacuum, what's stopping the code from
> > returning wrong tuples? I assume this stuff actually works, I just
> > couldn't see where the dead index entries get rejected.
> >
>
> Without checking the code, I suspect that dead rows are visible through the
> index (they had to be, to make time travel work), but do not match the time
> qual and so are not "seen".
Yes. The backend sees that the xmax of the heap tuple is committed and
doesn't return the tuple...
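A minimal sketch of that filtering step (a toy model, not the PostgreSQL source; the transaction ids, the `COMMITTED` set, and the helper names are all hypothetical): the index scan still finds the dead tuple, but the time qualification rejects it because its xmax belongs to a committed transaction.

```python
# Toy model: dead heap tuples remain reachable through the index,
# but the visibility (time qual) check filters them out.

COMMITTED = {1, 2}  # hypothetical set of committed transaction ids


def tuple_is_visible(xmin, xmax):
    """Visible iff the inserting xact committed and the tuple was not
    deleted by a committed xact (xmax is None or not committed)."""
    if xmin not in COMMITTED:
        return False
    if xmax is not None and xmax in COMMITTED:
        return False  # deleter committed: tuple is dead
    return True


# heap tuples keyed by tid: (xmin, xmax)
heap = {
    0: (1, None),  # live: inserted by xact 1, never deleted
    1: (1, 2),     # dead: deleted by committed xact 2
}
# both versions are still present in the index
index = [("key", 0), ("key", 1)]

visible_tids = [tid for _, tid in index if tuple_is_visible(*heap[tid])]
```

Only tid 0 survives the check; the dead version is returned by the index but never "seen" by the scan.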
BTW, I've fixed SUBJ. Scan adjustment didn't work when an
index page was split. I got rid of the ON INSERT adjustment
entirely: now the backend uses the heap tid of the current
index tuple to restore the current scan position before
searching for the next index tuple. (This will also allow us
to unlock the index page after we've fetched an index tuple
and are working in the heap, so index readers will not block
writers ... when LLL is implemented -:).
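The repositioning idea above can be sketched as follows (again a hypothetical model, not the btree code: the entry layout and function names are invented). The scan remembers the (key, heap tid) of the tuple it last returned rather than an offset within a page; since (key, tid) identifies an index entry uniquely, the position can be restored correctly even after a page split has moved entries around.

```python
import bisect

def next_index_tuple(index_entries, last_key, last_tid):
    """index_entries: iterable of (key, heap_tid) pairs.
    Re-locate the scan using (last_key, last_tid) and return the first
    entry strictly after it, or None at end of scan."""
    entries = sorted(index_entries)  # simulate re-reading the index pages
    i = bisect.bisect_right(entries, (last_key, last_tid))
    return entries[i] if i < len(entries) else None


index = [(10, 1), (10, 2), (20, 3)]
# Scan has returned (10, 1). A concurrent insert splits the page and
# adds a new entry before the scan resumes:
index.append((10, 5))
# The next call restores position from the heap tid, not from a stale
# page offset, so nothing is skipped or returned twice:
nxt = next_index_tuple(index, 10, 1)
```

Because the lookup keys on the tuple's identity rather than its physical position, a split that shuffles entries cannot make the scan re-read or miss tuples.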
The bug was more serious than the "non-functional update"
where the backend read index tuples twice: in some cases the
scan didn't return good tuples at all!
drop table bt;
create table bt (x int);
copy bt from '/var/home/postgres/My/Btree/ADJ/UNIQ';
-- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-- 1000 records with x in [1,1000]
--
create index bti on bt (x);
update bt set x = x where x <= 200;
update bt set x = x where x > 200 and x <= 210;
--
-- ONLY 4 tuples will be updated by last update!
--
I'll prepare a patch for 6.3...
Vadim