
From: Josh Berkus
Subject: Re: Potential autovacuum optimization: new tables
Date:
Msg-id: 5078CDEF.1080709@agliodbs.com
In response to: Re: Potential autovacuum optimization: new tables (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
> [ shrug... ]  You're attacking a straw man, or more precisely putting
> words into my mouth about what the percentage-based thresholds might be.
> Notice the examples I gave involved update percentages quite far north
> of 100%.  It's possible and maybe likely that we need a sliding scale.

Yes, or a logarithmic one.

> Also, I don't necessarily accept the conclusion you seem to be drawing,
> that it's okay to have complete turnover of a small table and not redo
> its stats.  

I'm not drawing that conclusion.  I'm explaining the logic of
autovacuum_analyze_threshold.   That logic actually works pretty well
for tables between 200 rows and 200,000 rows.  It's outside of those
boundaries where it starts to break down.
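For reference, the current analyze trigger is autovacuum_analyze_threshold
+ autovacuum_analyze_scale_factor * reltuples (50 and 0.1 by default).  A
rough back-of-the-envelope sketch of what that works out to at a few table
sizes, using only the stock defaults:

#include <stdio.h>

/* Current autovacuum analyze trigger, with the stock defaults. */
static double
analyze_trigger_rows(double reltuples)
{
    const double analyze_threshold = 50.0;   /* autovacuum_analyze_threshold */
    const double analyze_scale_factor = 0.1; /* autovacuum_analyze_scale_factor */

    return analyze_threshold + analyze_scale_factor * reltuples;
}

int
main(void)
{
    double sizes[] = {20, 200, 20000, 200000, 2000000, 200000000};

    for (int i = 0; i < 6; i++)
    {
        double trig = analyze_trigger_rows(sizes[i]);

        printf("%12.0f rows: analyze after %12.0f changed rows (%.1f%% of table)\n",
               sizes[i], trig, 100.0 * trig / sizes[i]);
    }
    return 0;
}

A 20-row table needs 260% turnover before it gets analyzed, while a 200M-row
table waits for 20 million changed rows; in the middle the numbers are sane.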

> The increased number of knobs may be a problem, but I don't think we can
> avoid having more.  Your own complaint is that the current design is too
> simplistic.  Replacing it with a different but equally simplistic design
> probably won't help much.

Well, we could do something which involves no GUCs at all, which would
be my favorite approach.  For example, Frost and I were discussing this
on IRC.  Imagine if the autovac threshold were set according to a simple
log function, so that very small tables get analyzed after 100% of rows
change, very large tables get analyzed after 0.1% of rows change, and
everything else falls somewhere in between?
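
Roughly something like the following sketch (the anchor points of 1,000
rows and 1 billion rows are just for illustration, not a concrete
proposal):

#include <math.h>
#include <stdio.h>

/*
 * Sketch of a log-scaled analyze threshold: interpolate the change
 * fraction logarithmically between 100% for tiny tables and 0.1% for
 * very large ones.  The anchor points below are assumptions.
 */
static double
log_analyze_scale_factor(double reltuples)
{
    const double small_tuples = 1000.0;       /* at or below: 100% of rows */
    const double large_tuples = 1000000000.0; /* at or above: 0.1% of rows */
    const double max_frac = 1.0;
    const double min_frac = 0.001;
    double t;

    if (reltuples <= small_tuples)
        return max_frac;
    if (reltuples >= large_tuples)
        return min_frac;

    /* position of reltuples between the two anchors, on a log scale */
    t = (log10(reltuples) - log10(small_tuples)) /
        (log10(large_tuples) - log10(small_tuples));

    /* interpolate the fraction itself on a log scale as well */
    return pow(10.0, log10(max_frac) + t * (log10(min_frac) - log10(max_frac)));
}

int
main(void)
{
    double sizes[] = {100, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8, 1e9};

    for (int i = 0; i < 8; i++)
    {
        double frac = log_analyze_scale_factor(sizes[i]);

        printf("%12.0f rows: analyze after %6.3f%% changed (%.0f rows)\n",
               sizes[i], frac * 100.0, frac * sizes[i]);
    }
    return 0;
}

With those anchors a 1M-row table lands at about 3% changed rows, which is
the sort of sliding scale Tom was describing, and it adds no new GUCs.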

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


