Re: Vacuum VS Vacuum Analyze - Mailing list pgsql-general

From Marek Pętlicki
Subject Re: Vacuum VS Vacuum Analyze
Date
Msg-id 20010325150951.F1221@marek.almaran.home
Whole thread Raw
In response to Re: Vacuum VS Vacuum Analyze  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Vacuum VS Vacuum Analyze  (Bruce Momjian <pgman@candle.pha.pa.us>)
List pgsql-general
On Friday, 2001-03-23 at 17:42:37, Tom Lane wrote:
> "Matt Friedman" <matt@daart.ca> writes:
> > I'm currently running vacuum nightly using cron, and once in a while I run
> > vacuum analyze (as postgres).
> > Any reason why I wouldn't just simply run vacuum analyze each night?
>
> If you can spare the cycles, you might as well make every vacuum a
> vacuum analyze.

I have found that vacuum, and especially vacuum analyze, on a heavily
used database sometimes seems to last forever. A quick-and-dirty hack
is to run it in two passes: first I drop all the indices and run a
plain vacuum, then I recreate the indices and run vacuum analyze.
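Roughly, the per-night sequence looks something like this (the database,
table, and index names here are only placeholders, not my real schema):

    psql -d mydb <<'EOF'
    -- drop the expensive indices first
    DROP INDEX orders_customer_idx;
    -- a plain vacuum runs quickly with the indices gone
    VACUUM;
    -- recreate the indices
    CREATE INDEX orders_customer_idx ON orders (customer_id);
    -- refresh the planner statistics
    VACUUM ANALYZE;
    EOF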

The whole process runs lightning fast (the longest step is recreating
the indices). The only problem is that users must not add anything to
the database while the indices are gone, because anything inserted in
that window could violate uniqueness and leave the unique-key indices
broken. My solution is a temporary shutdown of the services that use
the database (they are helper services for my WWW application), which
simply makes my application refuse to work. The whole process is
scheduled for the dead of night (about 4:00 AM), so hardly anybody can
notice ;-) (it takes approx. 5 minutes)
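A wrapper along these lines is all it takes; the service name and paths
below are only illustrative:

    #!/bin/sh
    # nightly_maintenance.sh -- run from cron at about 4:00 AM, e.g.:
    #   0 4 * * * /usr/local/bin/nightly_maintenance.sh

    # keep writes out while the indices are gone
    /etc/init.d/www-helpers stop

    # the drop / vacuum / recreate / vacuum analyze sequence sketched above
    psql -d mydb -f /usr/local/etc/nightly_vacuum.sql

    # bring the helper services back up
    /etc/init.d/www-helpers start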

The other solution would be to leave the unique indices in place (but I
don't know what the speed penalty would be in that case).

The question is: have I overlooked something? Does this routine carry
any danger that I have failed to notice?

regards

--
Marek Pętlicki <marpet@buy.pl>

