Re: Interpreting vacuum verbosity - Mailing list pgsql-general

From: Ed L.
Subject: Re: Interpreting vacuum verbosity
Msg-id: 200405101137.28730.pgsql@bluepolka.net
In response to: Re: Interpreting vacuum verbosity (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general

On Friday May 7 2004 12:48, Tom Lane wrote:
> "Ed L." <pgsql@bluepolka.net> writes:
> > 2)  Would this low setting of 10000 explain the behavior we saw of
> > seqscans of a perfectly analyzed table with 1000 rows requiring
> > ridiculous amounts of time even after we cut off the I/O load?
>
> Possibly.  The undersized setting would cause leakage of disk space
> (that is, new rows get appended to the end of the table even when space
> is available within the table, because the system has "forgotten" about
> that space due to lack of FSM slots to remember it in).  If the physical
> size of the table file gets large enough, seqscans will take a long time
> no matter how few live rows there are.  I don't recall now whether your
> VACUUM VERBOSE results showed that the physical table size (number of
> pages) was out of proportion to the actual number of live rows.  But it
> sure sounds like that might have been the problem.
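
For reference, I assume the quick way to double-check that (besides the
VACUUM VERBOSE numbers) is to compare the planner's page and row counts
in pg_class; 'mytable' below is just a placeholder for the table in
question:

    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname = 'mytable';

relpages * 8kB is roughly the on-disk size the last VACUUM/ANALYZE saw, so
a huge relpages next to ~1000 reltuples would confirm the leaked-space
theory.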

If it were indeed the case that we'd leaked a lot of disk space, then after
bumping max_fsm_pages up to a much higher number (4M), will these pages
gradually be "remembered" as they are accessed by autovac and/or queries,
etc.?  Or is a dump/reload or 'vacuum full' the only way?  Trying to avoid
downtime...
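
To make the question concrete, here's roughly what I have in mind (4M being
the new max_fsm_pages value mentioned above; 'mytable' is just a placeholder,
and I realize the conf change only takes effect on a postmaster restart):

    # postgresql.conf
    max_fsm_pages = 4000000    # up from 10000; requires a restart

    -- plain VACUUM only records free space going forward;
    -- VACUUM FULL actually compacts the file but takes an exclusive lock
    VACUUM VERBOSE mytable;
    VACUUM FULL VERBOSE mytable;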

