Tom Lane wrote:
> There's also the plan B of scanning pg_class to decide which relfilenode
> values are legit. IIRC Bruce did up a patch for this about a year ago,
> which I vetoed because I was afraid of the consequences if it removed
> data that someone really needed.
I posted a patch like that, 2-3 years ago I think. IIRC, the consensus
back then was to just write a log message listing the stale files, so an
admin can go and delete them manually. That's safer than deleting them
outright, and we'd get an idea of how much of a problem this is in
practice; at the moment a DBA has no way to know if there's leaked
space, short of manually comparing pg_class against the filesystem. If
that turns out to be reliable enough, and the problem big enough, we
could start deleting the files automatically in future releases.
I never got around to fixing the issues with that patch, but it's been
nagging at me all these years.
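Just to illustrate what that manual compare looks like today, here's a
rough sketch in Python (psycopg2, the hardcoded data directory, and the
flag_orphans name are all my own assumptions for the example, not
anything from the old patch):

    # Rough sketch: flag files under base/<db oid>/ that no pg_class row
    # in that database accounts for. Assumes psycopg2, access to the data
    # directory on the same host, and only the default "base" tablespace.
    import os
    import re
    import psycopg2

    DATADIR = "/var/lib/postgresql/data"   # assumed data directory

    # Strip fork suffixes (_fsm, _vm, _init) and segment numbers
    # (.1, .2, ...) so each file maps back to its base relfilenode.
    FILE_RE = re.compile(r"^(\d+)(_(fsm|vm|init))?(\.\d+)?$")

    def flag_orphans(dbname):
        """Print files in this database's directory whose relfilenode
        does not appear in pg_class. Anything that doesn't look like a
        relation file (PG_VERSION, pgsql_tmp, temp relations, ...) is
        left alone."""
        conn = psycopg2.connect(dbname=dbname)
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT oid FROM pg_database"
                            " WHERE datname = current_database()")
                db_oid = cur.fetchone()[0]
                cur.execute("SELECT relfilenode FROM pg_class"
                            " WHERE relfilenode <> 0")
                known = {str(row[0]) for row in cur.fetchall()}
        finally:
            conn.close()

        dbdir = os.path.join(DATADIR, "base", str(db_oid))
        for name in sorted(os.listdir(dbdir)):
            m = FILE_RE.match(name)
            if m and m.group(1) not in known:
                # A file can look orphaned while an uncommitted
                # transaction still owns it, which is exactly why
                # deleting automatically is risky.
                print("possibly orphaned:", os.path.join(dbdir, name))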
> Someone just mentioned doing the same
> thing but pushing the unreferenced files into a "trash" directory
> instead of actually deleting them. While that answers the
> risk-of-data-loss objection, I'm not sure it does much for the goal of
> avoiding useless space consumption: how many DBAs will faithfully
> examine and clean out that trash directory?
That sounds like a good idea to me. If a DBA finds himself running out
of disk space unexpectedly, he'll start looking around, and running
"rm trash/*" surely seems easier and safer than deleting individual
files from base/ by hand.
--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com