On Thu, 27 Feb 2003, Shridhar Daithankar wrote:
> On 24 Feb 2003 at 15:53, Ericson Smith wrote:
> > Currently our database is at about 15gigs. Over the period of a month,
> > it grows to about 25gigs (10Gb wasted space). Every month we have a
> > cleanup routine which involves copying the most actively updated tables
> > to text, and importing the data again. We vacuum every night and analyze
> > 4 times per day, but we're loath to do a VACUUM FULL because of the
> > table locking issues (locking some of the tables would break the
> > operation of some of our 24/7 systems), hence we prefer to stop the db
> > about once per month, eat the downtime as scheduled (about 1.5 hours),
> > and get back to business for the next 30 days again.
>
> We had a discussion on this a few days back and the solution might work
> for you as well (apart from the suggestions you have already received).
>
> Instead of a vacuum full on a table, back up the table to a dump file,
> drop it, and recreate it. It takes more effort than a simple vacuum full
> but may run much faster if you have a large amount of space to recover.
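For what it's worth, a rough sketch of that dump/drop/recreate approach from
the shell (big_table, mydb, and the dump file name are just placeholders for
your own):

pg_dump -t big_table mydb > big_table.dump
psql mydb -c "drop table big_table"
psql mydb -f big_table.dump

The dump should include the table definition, data, and indexes, but anything
else that depends on the table (views, foreign keys from other tables) may
need to be recreated by hand, and any writes made between the dump and the
reload are lost, so this still wants a quiet window for that table.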
Another option is to do something like:
begin;
select * into temp_table from big_table;
delete from big_table;
insert into big_table (select * from temp_table);
drop table temp_table;
commit;
This way your table is online while you're doing this, and can still be
selected by various clients without interruption.