Tom,
I have just rebuilt some of the indexes, with the result that about 15GB of
space has been recouped. Thanks for the tip.
I'm now trying to vacuum the tables, but pg_xlog is getting very big
(filling up the hard drive). As an estimate, how much space is needed to
vacuum a 30GB table (data + indexes)?
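For what it's worth, I've been gauging table and index sizes with a query
along these lines (it assumes the default 8 kB block size, and relpages is
only as fresh as the last VACUUM or ANALYZE, so treat the numbers as rough):

  SELECT relname, relkind, relpages * 8 / 1024 AS approx_mb
    FROM pg_class
   WHERE relkind IN ('r', 'i')   -- ordinary tables and indexes
   ORDER BY relpages DESC
   LIMIT 20;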
Thanks
Robert
Tom Lane <tgl@sss.pgh.pa.us>
To: Robert.Farrugia@go.com.mt
cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Requirements for a database server
18/07/2001 17:48
Robert.Farrugia@go.com.mt writes:
> I have been using postgres for the last year now. The database has grown
> from a mere few MBs to over 100GB of data, and is expected to top 300GB by
> the end of the year. Lately, performance of queries, inserts, and updates
> has continued to grow worse as the dataset has grown larger, even though
> most queries have indexes on them, while vacuuming the database has become
> a nightmare.
Have you tried dropping and rebuilding the indexes?
Currently, PG doesn't reclaim dead space in indexes very effectively,
so the indexes on a frequently-updated table tend to grow without bound.
(I may or may not be able to fix this for 7.2 --- it's next on my
to-look-at list, but no promises.) In the meantime, an occasional
rebuild may help restore performance.
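Just to be concrete, a rebuild is nothing more than something like the
following (the index and table names here are only placeholders):

  -- drop the bloated index, then build a fresh, compact one from scratch
  DROP INDEX foo_bar_idx;
  CREATE INDEX foo_bar_idx ON foo (bar);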
BTW, the vacuuming issue is pretty well fixed for 7.2 ...
regards, tom lane