On Thu, 2010-12-30 at 21:02 -0500, Tom Lane wrote:
> How is an incremental ANALYZE going to work at all?
How about a kind of continuous ANALYZE?
Instead of analyzing just once and then dropping the intermediate results,
keep them on disk for all tables, and piggyback on the background writer
(or a dedicated process, if that's not algorithmically feasible): before
writing out dirty buffers, update the statistics based on the values found
in those modified buffers. It could take a random sample of buffers to
minimize overhead, though if the work is done by a background process the
overhead should be small anyway on multi-core systems.
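Roughly the kind of incremental update I have in mind, as a minimal sketch
(nothing below is actual PostgreSQL code; col_stats, stats_observe and the
rest are made-up names): per column, keep running counters plus a small
reservoir sample, and feed it every value seen in a sampled modified buffer
before it is written out.

#include <stdlib.h>

#define RESERVOIR_SIZE 100

typedef struct col_stats
{
    long    rows_seen;                  /* total values observed so far */
    long    nulls_seen;                 /* how many of them were NULL   */
    int     sample_len;                 /* filled slots in the sample   */
    double  sample[RESERVOIR_SIZE];     /* reservoir of sampled values  */
} col_stats;

/* Feed one value (or a NULL) from a modified buffer into the stats. */
static void
stats_observe(col_stats *st, const double *value)
{
    st->rows_seen++;

    if (value == NULL)
    {
        st->nulls_seen++;
        return;
    }

    if (st->sample_len < RESERVOIR_SIZE)
    {
        /* reservoir not full yet: always keep the value */
        st->sample[st->sample_len++] = *value;
    }
    else
    {
        /* classic reservoir sampling: replace a random slot with
         * probability RESERVOIR_SIZE / (non-null values seen so far) */
        long    nonnull = st->rows_seen - st->nulls_seen;
        long    k = random() % nonnull;

        if (k < RESERVOIR_SIZE)
            st->sample[k] = *value;
    }
}

/* Null fraction is then just nulls_seen / rows_seen, and the sorted
 * reservoir gives approximate histogram bounds whenever a periodic
 * flush to the stats catalog wants them. */
static double
stats_null_frac(const col_stats *st)
{
    return st->rows_seen > 0 ? (double) st->nulls_seen / st->rows_seen : 0.0;
}

The real thing would of course have to deal with per-datatype comparison,
MCVs, n_distinct estimation and so on, but the point is that the per-value
work is cheap enough to hide behind the buffer write path.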
Not sure this makes sense at all, but if it does, it would deliver the most
up-to-date statistics you can think of.
Cheers,
Csaba.