On Tue, 11 May 1999, Tom Lane wrote:
> I think there are two known issues right now: the VACUUM one and
> something about DROP TABLE neglecting to delete the additional files
> for a multi-segment table. To my mind the VACUUM problem is a "must
> fix" because you can't really live without VACUUM, especially not on
> a huge database. The DROP problem is less severe since you could
> clean up by hand if necessary (not that it shouldn't get fixed of
> course, but we have more critical issues to deal with for 6.5).
I have been looking at the code for dropping the table. The code in
mdunlink() seems to be correct, and *should* work. Of course it doesn't, so
I'll do some more testing tonight and hopefully figure out why.
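For reference, this is roughly the cleanup loop I would expect to see for the
extra segments -- just a sketch from memory, not the actual md.c code.  The
helper name is made up; the "path.1", "path.2", ... naming is what the storage
manager produces when a relation grows past one segment:

    #include <stdio.h>
    #include <unistd.h>

    /* Sketch only -- not the real md.c code.  Removes the base file and
     * then each numbered segment until one is missing. */
    static int
    unlink_all_segments(const char *path)
    {
        char segpath[1024];
        int  segno;

        if (unlink(path) < 0)               /* base segment */
            return -1;

        for (segno = 1;; segno++)
        {
            snprintf(segpath, sizeof(segpath), "%s.%d", path, segno);
            if (unlink(segpath) < 0)        /* stop at first missing segment */
                break;
        }
        return 0;
    }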
As far as the VACUUM problem goes, I still haven't seen it. I have a couple
of ~3GB tables, with one growing to 5-6GB in the next month or so. VACUUM
runs just fine on both.
I just got to thinking: what about indexes > 2GB? On my 3GB table, one of
the indexes is already 540MB, and with growth on both I might get there.
Does that work, and does it use RELSEG_SIZE?
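For what it's worth, my understanding is that the segment math is done in
blocks, so an index should split into segments the same way a heap does.
Something like the sketch below -- illustration only, since the real BLCKSZ
and RELSEG_SIZE come from the build configuration:

    #define BLCKSZ      8192                  /* bytes per block         */
    #define RELSEG_SIZE 262144                /* blocks per segment file */

    /* Which segment file a given block lives in ("relname.N" for N > 0). */
    static int
    block_to_segment(int blkno)
    {
        return blkno / RELSEG_SIZE;
    }

    /* A 540MB index is only ~69,000 blocks, well inside segment 0; with
     * these numbers it wouldn't need a second segment until
     * 262144 * 8192 bytes = 2GB. */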
I would guess that the same functions (mdunlink, mdcreate, etc.) are called
for all DROPs and CREATEs (through DestroyStmt and CreateStmt)? I don't
understand the postgres internals well enough to answer that for sure, but it
does make sense.
Ole Gjerde