Hi,
I have a pgdata directory that takes up 3 GB on disk (measured at the
filesystem level). I run a plain vacuum every 15 minutes and a vacuum
with analyze every night.
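In case it matters, the two jobs boil down to something like this
("db" stands in for the real database name):

    vacuumdb db               # every 15 minutes
    vacuumdb --analyze db     # nightly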
After dumping the whole db (pg_dump -c db), dropping and recreating
the db, restoring the dump and vacuuming again, my pgdata directory
takes up only 1 GB. The dump completed without errors; all data was
saved and restored.
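Spelled out, the procedure was roughly this (the dump file name is
just for illustration):

    pg_dump -c db > db.dump
    dropdb db
    createdb db
    psql db < db.dump
    vacuumdb db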
The xlogs/clogs didn't account for the 2 GB difference, so I am
wondering what happened.
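For the record, I measured those directories with something like this
(assuming $PGDATA points at the data directory):

    du -sh $PGDATA/pg_xlog $PGDATA/pg_clog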
Shouldn't regular vacuuming have taken care of this?
A welcome side effect is that the postmaster now runs much faster:
queries execute noticeably quicker.
Comparing the relpages values from before and after also shows the
difference.
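I pulled those numbers with a query along these lines:

    SELECT relname, relpages
      FROM pg_class
     ORDER BY relpages DESC
     LIMIT 10;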
Any hints?
Greetings,
Bjoern