"Bjoern Metzdorf" <bm@turtle-entertainment.de> writes:
> Hi,
>
> I have a 3 GB (fs based) large pgdata directory. I regularly do vacuums
> every 15 minutes and vacuums with analyzing every night.
>
> After dumping the whole db (pg_dump -c db), dropping and creating the db,
> reinserting the dump and vacuuming again, my pgdata directory only contains
> 1 GB. The dump had no errors, all data has been saved and reinserted.
>
> The xlogs/clogs didn't take up 2 GB, so I am wondering what has happened.
>
> Shouldn't the vacuuming take care of this?
>
> A (desired) side effect is that the postmaster runs much faster now.
> Queries get executed much faster.
>
> If I compare the relpages from before and after, I see the difference there
> also.
>
> Any hints?
>
> Greetings,
> Bjoern

Chances are good that your indexes are growing out of control. If you
have tables with a lot of turnover, that is almost certainly the
problem. Shaun Thomas has written a script that will reindex your
database. It works very well, but it does lock tables, so it might
not be appropriate for your environment.
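
For what it's worth, you can get a rough feel for index bloat straight
from the catalogs before committing to a reindex. Something along these
lines (the table name here is just a placeholder, not from your schema):

```sql
-- Largest indexes by page count; relpages is only as fresh
-- as the last VACUUM or ANALYZE on each relation.
SELECT c.relname, c.relpages
FROM pg_class c
WHERE c.relkind = 'i'
ORDER BY c.relpages DESC;

-- Rebuild all indexes on one table. Note that REINDEX takes
-- an exclusive lock on the table while it runs.
REINDEX TABLE mytable;
```

If the index pages dwarf the table's own relpages, that's your missing
2 GB. Compare the same query before and after the rebuild and you should
see the relpages difference you noticed after the dump/restore.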

The script was posted to this list (a search for "reindex" turned it up
in my local mirror of the mailing list). If you can't find it, feel
free to contact me.
Jason