Re: large duplicated files - Mailing list pgsql-novice

From Christoph Frick
Subject Re: large duplicated files
Date Fri, 17 Aug 2007 08:37:07
Msg-id 20070817083707.GJ9296@byleth.sc-networks.de
In response to Re: large duplicated files  ("Ryan D. Enos" <renos@ucla.edu>)
List pgsql-novice
On Fri, Aug 17, 2007 at 12:15:13AM -0700, Ryan D. Enos wrote:

> Well, I feel like the guy who goes to the doctor and then finds the
> pain is suddenly gone when he gets there.  I have discovered that my
> previously described problem was almost certainly the result of
> temporary tables that were not being dropped after a crash through an
> ODBC connection (at least I hope that's where those files were coming
> from).  However, I am still curious if anybody knows how I can find
> and destroy those tables in the event of a crash?
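in case it helps with that last question: leftover temp tables live in
per-backend pg_temp_N schemas, so a catalog query along these lines can
spot them. a minimal sketch - the schema and table names in the final
comment are made up for illustration:

    -- list relations sitting in temporary schemas; a crashed session
    -- can leave these behind until the backend slot is reused
    SELECT n.nspname, c.relname
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname LIKE 'pg_temp%';

    -- a leftover table can then be dropped schema-qualified, e.g.:
    -- DROP TABLE pg_temp_3.some_leftover_table;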

there are lots of scripts out there (google for them) to find out which
table or index is actually using up your hard disk space. in an older
PostgreSQL version, for example, an index went nuts and kept growing
for no reason - reindexing helped there.
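if you don't want to dig for a script, a minimal query along these
lines does the same job (assuming PostgreSQL 8.1 or later, where the
pg_relation_size and pg_size_pretty functions exist; the index name in
the last comment is a placeholder):

    -- ten biggest relations (tables and indexes) by on-disk size
    SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
    FROM pg_class
    ORDER BY pg_relation_size(oid) DESC
    LIMIT 10;

    -- if a single index turns out to be the culprit, rebuilding it
    -- reclaims the space:
    -- REINDEX INDEX bloated_index_name;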

if you delete lots of data, also be sure to vacuum the db (depending on
your version of the db) and have enough fsm (free space map)
configured. do a verbose vacuum to find out whether the fsm is big
enough (this shows up at the end of the report).
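for example (the max_fsm_pages value below is just an illustration;
size it to what the report asks for, and note the setting only exists
up to the 8.x releases that still have a manual fsm):

    -- vacuum the whole database and print per-table details; the last
    -- lines report free space map usage versus what is needed
    VACUUM VERBOSE;

    -- if the report complains, raise the fsm settings in
    -- postgresql.conf and restart, e.g.:
    --   max_fsm_pages = 200000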

--
cu
