Igor wrote:
>
> ok...I've attached the .sql file to the message.
>
> create a new database, go to psql, and do \i z.sql, then do \d
> I tried it with the latest snapshot (970530) and I got duplicated tables
> all five times that I ran it. I suppose this might be OS-related
> somehow...
My fault! A bug in the 6.0 vacuum! I had to get rid of a time-travel relic
in vacuum: if one runs vacuum just after an update/delete, fast enough
that the vacuum transaction's start time equals the update/delete commit
time, then the old tuple will not be deleted, because htup->t_tmax ==
purgetime, and it may be re-incarnated by vacuum itself while shrinking
the relation.
Fixed. Sorry. Unfortunately, my box is not fast enough, so examples
like Igor's were not reproducible here (as I recall now, someone
reported tuple duplication after vacuum in 6.0)...
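The boundary condition can be sketched in a few lines of C. This is an illustrative reconstruction, not the actual 6.0 source: the struct, field widths, and function names below are made up for the example; only `t_tmax` and `purgetime` come from the description above. The point is that a strict `<` comparison leaves a tuple alive when the deleter's commit time exactly equals the purge time:

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int TimeT;

/* Hypothetical, simplified tuple header for illustration only. */
typedef struct {
    TimeT t_tmax;   /* commit time of the deleting transaction */
} HeapTupleHdr;

/* Buggy behavior (sketch): a tuple counts as dead only if it expired
 * strictly before the purge time, so t_tmax == purgetime keeps the
 * old version around, and shrinking can re-incarnate it. */
static bool tuple_dead_buggy(const HeapTupleHdr *htup, TimeT purgetime)
{
    return htup->t_tmax < purgetime;
}

/* Fixed behavior (sketch): a tuple whose deleter committed at or
 * before the purge time is dead and must not be moved. */
static bool tuple_dead_fixed(const HeapTupleHdr *htup, TimeT purgetime)
{
    return htup->t_tmax <= purgetime;
}
```

With a fast enough machine the vacuum starts within the clock's resolution of the commit, the two timestamps collide, and the buggy predicate returns false for a tuple that should have been removed.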
Also, if an insert/delete transaction is still in progress (this is
possible for system relations), then vacuum doesn't shrink the relation.
Also, if xmax is neither committed, aborted, nor in progress, then vacuum
does StoreInvalidTransactionId(&(htup->t_xmax)): the update/delete was
made by a crashed backend.
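That last rule can be sketched as follows. Again this is an illustrative model, not the real code: the enum, struct, and function names are invented for the example, with a plain assignment standing in for StoreInvalidTransactionId. The idea is that an xmax in no known state can only belong to a crashed backend, so its delete never took effect and the marker is cleared:

```c
#include <assert.h>

typedef unsigned int TransactionId;
#define InvalidTransactionId ((TransactionId) 0)

/* Hypothetical transaction states for illustration. */
typedef enum {
    XACT_COMMITTED,
    XACT_ABORTED,
    XACT_INPROGRESS,
    XACT_UNKNOWN        /* none of the above: backend crashed */
} XactStatus;

/* Hypothetical, simplified tuple header for illustration only. */
typedef struct {
    TransactionId t_xmax;   /* id of the deleting transaction */
} TupleHdr;

/* Sketch of the vacuum rule: an xmax that is neither committed,
 * aborted, nor in progress was set by a crashed backend, so vacuum
 * resets it (standing in for StoreInvalidTransactionId), making the
 * tuple visible again as if the delete had never happened. */
static void vacuum_check_xmax(TupleHdr *htup, XactStatus xmax_status)
{
    if (xmax_status == XACT_UNKNOWN)
        htup->t_xmax = InvalidTransactionId;
}
```

A tuple whose xmax is committed, aborted, or in progress is left alone; only the crashed-backend case is rewritten.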
Vadim
------------------------------