>> One possibility: vacuum already knows how many tuples it removed. We
>> could set reltuples equal to, say, the mean of the number-of-tuples-
>> after-vacuuming and the number-of-tuples-before. In a steady state
>> situation this would represent a fairly reasonable choice. In cases
>> where the table size has actually decreased permanently, it'd take a few
>> cycles of vacuuming before reltuples converges to the new value, but that
>> doesn't seem too bad.
>
> That sounds good to me. Covers all cases I can see from here.
Yes, sounds good to me also. I think that would be a good thing even viewed in
isolation from the rest of the proposal. I am sorry if I gave the impression that
I don't like a change in this direction in general; I think there is a need for both.
I am only worried about core OLTP applications where every query is highly tuned
(and a different plan is more often than not counterproductive, especially if it
comes and goes without intervention).
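To make sure I read the averaging right, a quick sketch (plain C, hypothetical
names, not backend code) of how the estimate would converge after a permanent
shrink, reading "the number-of-tuples-before" as the previous reltuples value:

    /* Each vacuum sets reltuples to the mean of the old estimate and the
     * post-vacuum count, so the gap to the true value halves per cycle. */
    #include <stdio.h>

    int main(void)
    {
        double reltuples = 1000000.0;  /* stale estimate before the shrink */
        double actual    = 10000.0;    /* true tuple count afterwards */

        for (int cycle = 1; cycle <= 7; cycle++)
        {
            reltuples = (reltuples + actual) / 2.0;
            printf("after vacuum %d: reltuples = %.0f\n", cycle, reltuples);
        }
        return 0;
    }

Since the gap halves each cycle, a handful of vacuums gets close enough for
costing purposes even after a hundredfold shrink, which matches "a few
cycles" above.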
>> A standalone ANALYZE should still do what it does now, though, I think;
>> namely set reltuples to its best estimate of the current value.
good, imho :-)
> A GUC-free solution...but yet manual control is possible. Sounds good to
> me - and for you Andreas, also?
It is the GUC to keep the optimizer from using the dynamic page count that
I would still like to have.
I especially liked Simon's name for it: enable_dynamic_statistics=true
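Roughly what I have in mind, as a sketch only (the GUC is merely proposed,
and all names below are hypothetical, not actual backend code):

    #include <stdbool.h>
    #include <stdio.h>

    bool enable_dynamic_statistics = true;   /* proposed GUC, default on */

    typedef struct RelStats
    {
        double stored_pages;   /* relpages as of the last VACUUM/ANALYZE */
        double live_pages;     /* current physical size of the relation */
    } RelStats;

    /* Page count the planner would feed into its cost model. */
    static double
    pages_for_planning(const RelStats *rel)
    {
        return enable_dynamic_statistics ? rel->live_pages
                                         : rel->stored_pages;
    }

    int main(void)
    {
        RelStats t = { .stored_pages = 100.0, .live_pages = 2500.0 };

        enable_dynamic_statistics = false;   /* pin plans to stored stats */
        printf("pages used for costing: %.0f\n", pages_for_planning(&t));
        return 0;
    }

Turned off, plans would only change when I choose to run VACUUM/ANALYZE,
which is exactly the manual control I want for tuned OLTP queries.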
Tom wrote:
>> But I am used to applications
>> that prepare a query and hold the plan for days or weeks. If you happen to
>> create the plan when the table is by chance empty you lost.
>
> You lose in either case, since this proposal doesn't change when
> planning occurs or doesn't occur.
This is not true in my case, since I only run ANALYZE ("update statistics") when
the tables have representative content (i.e., not empty).
Andreas