> > I haven't been paying much attention, but I imagine that what's really
> > going on here is that once vacuum has collected all the still-good
> > tuples at the front of the relation, it doesn't bother to go through
> > the remaining blocks of the relation and mark everything dead therein?
> > It just truncates the file after the last block that it put tuples into,
> > right?
> >
> > If this procedure works correctly for vacuuming a simple one-segment
> > table, then it would seem that truncation of all the later segments to
> > zero length should work correctly.
>
> Not sure about that.  When we truncate a single-segment file, the table
> is being destroyed, so we invalidate it in the catalog cache and tell
> the other backends.  Also, we have a problem with DROP TABLE in a
> transaction while others are using it, as described in a bug report a
> few days ago, so I don't think we have that case 100% right either.
>
> That is interesting.  I never thought of that.  Hiroshi, can you test
> that idea?  If it is the non-existence of the file that other backends
> are checking for, my earlier idea of rename() with a truncated file
> kept in place may be better.
>
> Also, I see why we are not getting more bug reports.  They only hit
> this when the table loses a segment, so it is OK to vacuum large tables
> as long as the table doesn't lose a segment during the vacuum.
OK, this is 100% wrong.  We truncate from vacuum any time the table size
changes, and vacuum of large tables will fail even when it is not
removing a segment.  I forgot vacuum does this to reduce the on-disk
size of the table.
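To make the mechanism concrete, here is a rough sketch of what the
truncation step amounts to.  This is not our actual code; the BLCKSZ and
RELSEG_SIZE values and the "relpath.N" segment naming are assumptions
for illustration.  The idea is to truncate the segment that contains the
new last block, and shrink every later segment to zero length, keeping
the files themselves in place:

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define BLCKSZ      8192        /* bytes per block (assumed) */
    #define RELSEG_SIZE 131072      /* blocks per segment file (assumed) */

    static void
    truncate_rel(const char *relpath, long nblocks, int nsegments)
    {
        char    path[1024];
        int     seg;

        for (seg = 0; seg < nsegments; seg++)
        {
            long    segfirst = (long) seg * RELSEG_SIZE;
            long    keep;       /* blocks to keep in this segment */

            if (nblocks <= segfirst)
                keep = 0;       /* segment is now entirely past the end */
            else if (nblocks >= segfirst + RELSEG_SIZE)
                keep = RELSEG_SIZE;     /* segment is still full */
            else
                keep = nblocks - segfirst;

            if (seg == 0)
                snprintf(path, sizeof(path), "%s", relpath);
            else
                snprintf(path, sizeof(path), "%s.%d", relpath, seg);

            /* Shrink in place; the file is kept, never unlinked. */
            if (truncate(path, (off_t) keep * BLCKSZ) < 0)
                perror(path);
        }
    }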
I wonder if truncating a file to reduce its size will cause problems for
other readers of the table.  I thought vacuum held an exclusive lock on
the table for the duration of the vacuum, and if so, why are other
backends having trouble?
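For reference, the rename() idea quoted above could look roughly like
the sketch below.  The ".tmp" naming is just for illustration; the point
is that rename() replaces the old segment atomically, so there is never
a moment when the file is missing for a backend that tests for its
existence:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int
    replace_with_empty(const char *segpath)
    {
        char    tmppath[1024];
        int     fd;

        snprintf(tmppath, sizeof(tmppath), "%s.tmp", segpath);

        /* Create the zero-length replacement first... */
        fd = open(tmppath, O_CREAT | O_WRONLY | O_TRUNC, 0600);
        if (fd < 0)
            return -1;
        close(fd);

        /* ...then swap it in atomically; the old name never vanishes. */
        return rename(tmppath, segpath);
    }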
-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026