large duplicated files - Mailing list pgsql-novice

From Ryan D. Enos
Subject large duplicated files
Msg-id 46C54144.3080009@ucla.edu
Responses Re: large duplicated files  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-novice
Hi,
I am very new to postgresql and am not really a programmer of any type.
I use pgsql to manage very large voter databases for political science
research.  My problem is that my database is creating large duplicate
files: 17398.1, 17398.2, 17398.3, and so on.  Each is about 1 GB in size.
I understand that each of these is probably a part of a file that pgsql
created because of a limit on file size and that they may be large
indexes.  However, I don't know where these files came from or how to
reclaim the disk space.
I have searched the archives extensively and found that I am not the
first to have this problem.  I have followed the suggestions made to
previous posters, running VACUUM FULL and REINDEX, but nothing reclaims
the disk space.  I have tried to identify the file's relation with:
SELECT * FROM pg_class WHERE relfilenode = 17398;
but this returns 0 rows.
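(For reference, here is a sketch of the lookup I was attempting, assuming 17398 is the numeric prefix of the files on disk.  Note that relfilenode is an OID, so it has to be compared to a number, not a quoted or empty string:)

```sql
-- 17398 is taken from the on-disk file names 17398.1, 17398.2, ...
-- relname/relkind show which table or index owns that filenode.
SELECT relname, relkind
FROM pg_class
WHERE relfilenode = 17398;
```

If a query like this still returns 0 rows, I gather the files may belong to a relation that no longer exists in the catalog.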
How can I reclaim this space and prevent these files from being created
in the future?
Any help would be greatly appreciated.
Thanks.
Ryan
