> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
> Sent: Friday, June 18, 1999 12:54 AM
> To: Bruce Momjian
> Cc: PostgreSQL-development; Inoue@tpf.co.jp
> Subject: Re: [HACKERS] tables > 1 gig
>
>
> Bruce Momjian <maillist@candle.pha.pa.us> writes:
> >> I think what we ought to do is finish working out how to make
> >> mdtruncate safe for concurrent backends, and then do it. That's the
> >> right long-term answer anyway.
>
> > Problem is, no one knows how right now.  I liked unlinking every
> > segment, but was told by Hiroshi that this causes a problem with
> > concurrent access and vacuum, because the old backends still think
> > the segment is there.
>
> I haven't been paying much attention, but I imagine that what's really
> going on here is that once vacuum has collected all the still-good
> tuples at the front of the relation, it doesn't bother to go through
> the remaining blocks of the relation and mark everything dead therein?
> It just truncates the file after the last block that it put tuples into,
> right?
>
> If this procedure works correctly for vacuuming a simple one-segment
> table, then it would seem that truncation of all the later segments to
> zero length should work correctly.
>
> You could truncate to zero length *and* then unlink the files if you
> had a mind to do that, but I can see why unlink without truncate would
> not work reliably.
>
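First, to make the truncate-without-unlink variant concrete, mdtruncate
would do roughly the following. This is only a sketch, not the actual
md.c code; the RELSEG_SIZE and BLCKSZ values and the segment_fd()
helper are assumptions for illustration:

#include <sys/types.h>
#include <unistd.h>

#define RELSEG_SIZE 131072      /* blocks per segment (assumed value) */
#define BLCKSZ      8192        /* bytes per block (assumed value) */

/* hypothetical helper: returns an open fd for segment number segno */
extern int segment_fd(int segno);

static void
truncate_rel(int nblocks, int nsegments)
{
    int lastseg = nblocks / RELSEG_SIZE;
    int segno;

    /* shorten the segment that now contains the last block */
    ftruncate(segment_fd(lastseg),
              (off_t) (nblocks % RELSEG_SIZE) * BLCKSZ);

    /* truncate every later segment to zero length, but keep the
     * files around: descriptors cached by other backends stay valid */
    for (segno = lastseg + 1; segno < nsegments; segno++)
        ftruncate(segment_fd(segno), 0);
}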
Unlinking the unused segments after truncating them to zero length may
still produce a race: existing backends write to the truncated (and now
unlinked) file to extend the relation, while new backends create a fresh
segment file under the same name to extend it, so the two groups end up
writing to different files. A minimal demonstration follows.
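The stand-alone program below shows the effect (the file name rel.1 is
made up for illustration; this is generic POSIX behavior, not
PostgreSQL code):

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int
main(void)
{
    struct stat st;
    int oldfd, newfd;

    /* an "existing backend" already has the segment open */
    oldfd = open("rel.1", O_RDWR | O_CREAT, 0600);

    /* vacuum truncates the segment to zero length and unlinks it */
    ftruncate(oldfd, 0);
    unlink("rel.1");

    /* the existing backend extends the relation through its cached
     * descriptor; the write succeeds, but the page goes to an inode
     * that no longer has a directory entry */
    write(oldfd, "lost page", 9);

    /* a new backend opens the segment by name, and O_CREAT silently
     * creates a second, unrelated file of length zero */
    newfd = open("rel.1", O_RDWR | O_CREAT, 0600);
    fstat(newfd, &st);
    printf("size seen by new backend: %ld\n", (long) st.st_size); /* 0 */

    close(oldfd);
    close(newfd);
    return 0;
}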
Comments?
Regards.
Hiroshi Inoue
Inoue@tpf.co.jp