On Fri, 2005-03-25 at 15:22 -0500, Tom Lane wrote:
> 2. Dead tuples don't have that much influence on scan costs either, at
> least not once they are marked as known-dead. Certainly they shouldn't
> be charged at full freight.
Yes, they add only minor CPU time; the main issue is when the dead
tuples force additional I/O.
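To put a toy number on that (this is not PostgreSQL source; the constants
and the simplified cost formula are just illustrative): the dead tuples
inflate the number of pages a seq scan has to read, while adding almost
nothing per-tuple once they are known-dead.

    /* Illustrative only, not planner code: dead tuples hurt mainly through
     * I/O because they inflate the page count a seq scan must read, while
     * the useful output (and most CPU work) tracks only the live tuples. */
    #include <stdio.h>

    int
    main(void)
    {
        double live_tuples    = 100000.0;
        double dead_tuples    = 50000.0;   /* hypothetical bloat */
        double tuples_per_pg  = 100.0;     /* average tuples per page */
        double seq_page_cost  = 1.0;       /* per-page I/O cost unit */
        double cpu_tuple_cost = 0.01;      /* per-tuple CPU cost unit */

        /* Pages are dictated by physical tuples (live + dead) ... */
        double pages = (live_tuples + dead_tuples) / tuples_per_pg;

        /* ... so the I/O term grows with bloat, while the CPU term and the
         * rows returned track only the live tuples. */
        double cost = pages * seq_page_cost + live_tuples * cpu_tuple_cost;

        printf("pages=%.0f  seqscan cost=%.1f\n", pages, cost);
        return 0;
    }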
> It's possible that there'd be some value in adding a column to pg_class
> to record dead tuple count, but given what we have now, the calculation
> in lazy_update_relstats is totally wrong.
Yes, that's the way. We can record the (averaged?) dead tuple count, but
also record the actual row count in reltuples.
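Purely as a sketch of what I mean (this is not the real pg_class
definition, and "reldeadtuples" is just a made-up column name):

    /* Sketch only: keep reltuples as the live (logical) count and track
     * dead tuples separately alongside the physical page count. */
    typedef int   int4;     /* stand-ins for the catalog types */
    typedef float float4;

    typedef struct HypotheticalRelStats
    {
        int4    relpages;       /* physical size in pages, as today */
        float4  reltuples;      /* live tuples as of last VACUUM/ANALYZE */
        float4  reldeadtuples;  /* hypothetical: dead tuples as of last VACUUM */
    } HypotheticalRelStats;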
We definitely need to record both the physical and the logical tuple
counts, since they contribute differently to run-times.
For comparing a seq scan against an index scan, we need to look at the
physical tuple count * avg row size, whereas when we calculate the number
of rows returned we should look at fractions of the logical row count.
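Again just as an illustration (toy arithmetic, not planner source; all the
numbers are invented), the physical count drives the pages we read while
the logical count drives the row estimate:

    /* Toy planner arithmetic, not PostgreSQL source. */
    #include <stdio.h>

    int
    main(void)
    {
        double live_tuples  = 100000.0;   /* logical count */
        double dead_tuples  = 50000.0;    /* hypothetical bloat */
        double avg_row_size = 100.0;      /* bytes, including overhead */
        double block_size   = 8192.0;     /* 8kB pages */
        double selectivity  = 0.05;       /* fraction of rows the qual keeps */

        /* Physical side: how many pages a seq scan has to read. */
        double physical_tuples = live_tuples + dead_tuples;
        double pages = physical_tuples * avg_row_size / block_size;

        /* Logical side: how many rows the scan is expected to return. */
        double rows_out = live_tuples * selectivity;

        printf("pages to read = %.0f, expected rows out = %.0f\n",
               pages, rows_out);
        return 0;
    }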
> The idea I was trying to capture is that the tuple density is at a
> minimum right after VACUUM, and will increase as free space is filled
> in until the next VACUUM, so that recording the exact tuple count
> underestimates the number of tuples that will be seen on-the-average.
> But I'm not sure that idea really holds water. The only way that a
> table can be at "steady state" over a long period is if the number of
> live tuples remains roughly constant (ie, inserts balance deletes).
> What actually increases and decreases over a VACUUM cycle is the density
> of *dead* tuples ... but per the above arguments this isn't something
> we should adjust reltuples for.
>
> So I'm thinking lazy_update_relstats should be ripped out and we should
> go back to recording just the actual stats.
>
> Sound reasonable? Or was I right the first time and suffering brain
> fade today?
Well, I think the original idea had some validity, but clearly
lazy_update_relstats isn't the way to do it even though we thought so at
the time.
Best Regards,
Simon Riggs