> I know this topic has been rehashed a million times, but I just wanted to
> add one datapoint. I have a database (150 tables, less than 20K tuples
> in any one table) which I 'vacuum analyze' *HOURLY*, blocking all access,
> and I still see frequent situations where my query times bloat by roughly
> 300% (4 times slower) in the intervening time between vacuums. All this
> is to say that I think a more strategic implementation of the
> functionality of vacuum analyze (specifically, non-batched, automated,
> on-the-fly vacuuming/analyzing) would be a major "value add". I haven't
> educated myself as to the history of it, but I do wonder why the
> performance focus is not on this. I'd imagine it would be a performance
> hit (which argues for making it optional), but I'd gladly take a 10%
> performance hit over the current highly undesirable degradation. You
> could do a whole lotta optimization on the planner/parser/executor and
> not get close to the end-user-perceptible gains from fixing this
> problem...
>
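For concreteness, the hourly maintenance described above presumably boils
down to something run from cron once an hour; a minimal sketch (the poster's
actual script is not shown):

    -- run once an hour; in these releases VACUUM takes exclusive locks,
    -- so other access to each table is blocked while it is processed.
    -- With no table name given, every table is vacuumed and analyzed.
    VACUUM ANALYZE;
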
Vadim is planning an overwrite storage manager for 7.2, which will allow
expired tuples to be reused without vacuum.
Or is ANALYZE the issue for you? Do you need hourly statistics?
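
One way to get a feel for that is to look at what the planner is working
from: pg_class holds the page and row counts recorded by the last VACUUM,
so checking them against the real table sizes shows how much drifts in the
hour between runs. A minimal sketch, using only the standard catalogs:

    -- page/row estimates as of the last VACUUM; large drift from the
    -- actual counts between hourly runs suggests the statistics matter
    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relkind = 'r'
     ORDER BY reltuples DESC;
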
--
  Bruce Momjian                        |  http://www.op.net/~candle
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026