On Friday May 7 2004 11:25, Tom Lane wrote:
> "Ed L." <pgsql@bluepolka.net> writes:
> > No, our autovac logs the number of changes (upd+del for vac,
> > upd+ins+del for analyze) on each round of checks, and we can see it was
> > routinely performing when expected. The number of updates/deletes just
> > far exceeded the thresholds. Vac threshold was 2000, and at times
> > there might be 300,000 outstanding changes in the 10-30 minutes between
> > vacuums.
>
> Well, in that case you probably want a lot less than "10-30 minutes"
> between vacuums, at least for this particular table. I don't know how
> one configures autovac for this, but I suppose it can be done ...
This period is the minimum time it takes to vacuum or analyze, in round-robin
fashion, every table that "needs it". Sometimes it is much shorter (seconds),
sometimes longer, depending on how much upd/del/ins activity there has been.
Either way, 10-30 minutes between vacuums seems too long for this table.
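For reference, the contrib pg_autovacuum daemon of that era triggered a vacuum
once the outstanding upd+del count exceeded a base threshold plus a scaling
factor times the table's row count. A minimal sketch of that decision rule
(variable names and the default values here are illustrative assumptions, not
the daemon's own):

```python
# Sketch of the pg_autovacuum-style threshold test: vacuum a table once
# upd+del since the last vacuum exceeds  base + scale * reltuples.
# Defaults below are assumed for illustration.

def needs_vacuum(upd_plus_del, reltuples, base=1000, scale=2.0):
    """Return True when outstanding changes exceed the vacuum threshold."""
    return upd_plus_del > base + scale * reltuples

# A 500-row table with these defaults has a threshold of 2000 changes,
# matching the "vac threshold was 2000" figure above.
print(needs_vacuum(300_000, 500))  # True  -- 300K changes, far past threshold
print(needs_vacuum(1_500, 500))    # False -- still under threshold
```

The point being: with a fixed per-table threshold, a hot table can accumulate
hundreds of thousands of changes while the daemon works through its
round-robin, so the effective limit is the cycle time, not the threshold.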
> > max_fsm_relations = 1000 and max_fsm_pages = 10000.
>
> Also you doubtless need max_fsm_pages a lot higher than that. A
> conservative setting would make it as big as your whole database,
> eg for a 10Gb disk footprint use 10Gb/8K (something upwards of
> a million) FSM page slots.
Ah, OK. Two questions:
1) I'm inclined to set this to handle as large a DB footprint as we expect in
the coming year or two, so maybe 3X what it is now. What is the impact/cost
of setting max_fsm_pages at, say, 3M for an 8GB footprint? (3 x 8GB/8K)
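Spelling out the arithmetic behind that 3M figure (assuming the default 8 KiB
PostgreSQL block size):

```python
# max_fsm_pages sizing: one slot per 8 KiB heap page in the disk footprint,
# times a 3x headroom factor for expected growth.
BLOCK_SIZE = 8 * 1024        # default PostgreSQL page size, bytes
footprint = 8 * 1024**3      # current ~8 GB disk footprint, bytes

pages = footprint // BLOCK_SIZE
slots = 3 * pages            # 3x headroom for the next year or two
print(pages, slots)          # 1048576 3145728
```

(If memory serves, the docs of that era put the cost at roughly six bytes of
shared memory per FSM page slot, so 3M slots would be on the order of 18 MB;
worth double-checking against the docs for your version.)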
2) Would this low setting of 10000 explain the behavior we saw: seqscans of a
perfectly analyzed 1000-row table taking a ridiculous amount of time, even
after we cut off the I/O load?
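To make that suspicion concrete: if the FSM is too small to remember freed
space, updates keep extending the table instead of reusing dead pages, and a
seqscan must read every page regardless of live row count. A back-of-envelope
sketch (the page counts here are hypothetical, not measured from our system):

```python
# Hypothetical bloat scenario: a 1000-row table whose dead space was never
# reused because max_fsm_pages was too small to track it.
BLOCK_SIZE = 8 * 1024

healthy_pages = 10       # ~1000 small rows fit in a handful of pages
bloated_pages = 50_000   # hypothetical page count after heavy update churn

# A seqscan reads every page, live rows or not:
print(healthy_pages * BLOCK_SIZE // 1024**2, "MB")  # 0 MB
print(bloated_pages * BLOCK_SIZE // 1024**2, "MB")  # 390 MB per scan
```

Which would explain scans that stay slow even with no competing I/O: the scan
itself is hundreds of megabytes of mostly-empty pages.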