Rob Bamber wrote:
> Another thought -
>
> We had a similar issue recently. Our support guys dropped the database and
> then rebuilt it from a dump file. The size of the data directory went down
> from 12GB to less than 2GB. According to the sysadmin who did the
> work, postgres is not very good at reclaiming disk space after large
> quantities of tuples are deleted over time.
>
> HTH
>
That's because you need to run 'VACUUM [FULL | ANALYZE]' on the
database frequently. For example, in our case there are about 2000
updates/inserts and around 50 deletes every 3 minutes. 'UPDATE' and
'DELETE' operations are the most expensive in terms of disk space,
because the old versions of the updated/deleted tuples are kept on
disk until VACUUM is run. I've set it up so that a daemon runs VACUUM
ANALYZE every 6 minutes or so, and then a full vacuum, i.e. VACUUM
FULL ANALYZE, every 30 minutes.
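If it helps, a rough sketch of that kind of schedule as crontab
entries for the postgres user, assuming a vixie-style cron (one that
understands the */N step syntax) and the vacuumdb wrapper that ships
with PostgreSQL; the database name 'mydb' is just a placeholder:

    # plain VACUUM ANALYZE every 6 minutes -- doesn't block readers
    */6 * * * *   vacuumdb --analyze mydb
    # VACUUM FULL ANALYZE every 30 minutes -- takes exclusive locks,
    # but actually returns disk space to the operating system
    */30 * * * *  vacuumdb --full --analyze mydb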
As far as I know, in PostgreSQL 7.4 a plain VACUUM doesn't block read
operations, so occasionally running VACUUM on just the most 'popular'
tables is also a good idea. I'm sure in your case the right frequency
for vacuuming will vary, but that is basically why the database shrank
from 12GB to 2GB. VACUUM reclaims the space held by dead tuples (and
their index entries): a plain VACUUM marks it as reusable, while
VACUUM FULL compacts the files and returns the space to the operating
system, 'compressing' the disk space used (and effectively speeding up
queries by two to three orders of magnitude).
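For hitting just one hot table from a script, something along these
lines would do (the table name 'hot_table' is made up, of course):

    psql -c "VACUUM ANALYZE hot_table" mydb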
Steve