I have a table with 1 live row that I found has 115,000 dead rows in it
(from a testing run). I'm trying to VACUUM FULL the table, and it has
been running for over 18 hours without completing. Given the hardware
on this box, and the fact that performance seems reasonable in every
other respect, I'm confused as to why this would happen. The rest of
the database is quite large (70 GB on disk), and I would expect a
vacuum of the whole thing to take days, but I only ran
'vacuum full table_stats'. That should touch only that table, correct?
I'm running 8.0.3.
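In case it's useful, here is roughly what I was planning to run next to
see whether another backend is holding a lock that blocks the vacuum.
I'm assuming the pg_locks and pg_stat_activity views on 8.0 expose the
columns I'm using here (procpid rather than pid, etc.):

    -- Are there any locks, granted or waiting, on this table?
    SELECT l.pid, l.mode, l.granted
      FROM pg_locks l, pg_class c
     WHERE l.relation = c.oid
       AND c.relname = 'table_stats';

    -- What is every backend currently doing?
    SELECT procpid, query_start, current_query
      FROM pg_stat_activity;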
Table "public.table_stats"
Column | Type | Modifiers
---------------------+-----------------------------+-----------
count_cfs | integer |
count_ncfs | integer |
count_unitactivity | integer |
count_eventactivity | integer |
min_eventmain | timestamp without time zone |
max_eventmain | timestamp without time zone |
min_eventactivity | timestamp without time zone |
max_eventactivity | timestamp without time zone |
geocoding_hitrate | double precision |
recent_load | timestamp without time zone |
count_eventmain | integer |
This is the table structure.
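For reference, this is the query I'll use to confirm how big the table
actually is on disk (relpages is in 8 kB blocks; I'm assuming pg_class
is reasonably up to date from the last ANALYZE):

    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname = 'table_stats';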
Any ideas where to begin troubleshooting this?
Thanks.