On Thu, Sep 1, 2011 at 6:38 AM, Rik Bellens <rik.bellens@telin.ugent.be> wrote:
> On 01-09-11 14:22, Scott Marlowe wrote:
>> Yeah, could be. Take a look at this page:
>> http://wiki.postgresql.org/wiki/Show_database_bloat and see if the
>> query there sheds some light on your situation.
>
> thanks for this answer
>
> If I run the query, I get 12433752064 wasted bytes on stats_count_pkey, so I
> suppose that is the cause.
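Twelve GB of dead space in a primary-key index is almost certainly it. A quick
one-off fix is to rebuild just that index (a sketch; I'm assuming the index
belongs to a table called stats_count, and note REINDEX locks the table
against writes while it runs):

```sql
-- Rebuild only the bloated primary-key index; the table's heap is untouched.
-- Takes an exclusive lock on stats_count for the duration.
REINDEX INDEX stats_count_pkey;

-- Then re-run the wiki bloat query to confirm the wasted bytes dropped.
```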
Also look into installing something like nagios and the
check_postgres.pl plugin to keep track of these things before they
get out of hand.
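Something along these lines (a sketch from memory, so check the script's
--help; the action name and thresholds here are assumptions for illustration):

```shell
# Warn when any relation carries more than 100 MB of bloat, go critical
# at 1 GB. Point --dbname/--host/--port at your own server.
./check_postgres.pl --action=bloat --dbname=yourdb \
    --warning='100 M' --critical='1 G'
```

Wired into nagios, that pages you long before an index hits 12 GB of waste.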
csb time: Back in the day when pg 6.5.3 and 7.0 were new and
interesting, I had a table that was 80k or so, and an index that was
about 100M. Back when dual-core machines were servers, and 1G of RAM was
an extravagance. I had a process that deleted everything from the
table each night and replaced it, and the index grew so huge that
lookups were taking something like 10 seconds each. A simple drop /
create index fixed it right up. The check_postgres.pl script is a
godsend for keeping your db healthy and happy.
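For anyone hitting the same pattern today: a nightly delete-everything-and-reload
is exactly what inflates a btree, because DELETE leaves dead tuples and dead
index pages behind. TRUNCATE sidesteps that entirely (a sketch; the table name
and reload query are made up):

```sql
-- Bloat-prone nightly pattern:
--   DELETE FROM lookup;           -- dead tuples and index pages pile up
--   INSERT INTO lookup SELECT ...;

-- Better: TRUNCATE throws away the heap and all indexes in one shot,
-- so nothing accumulates from night to night.
BEGIN;
TRUNCATE lookup;
INSERT INTO lookup SELECT ...;  -- reload from wherever the data comes from
COMMIT;
```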