Moved to -hackers where this belongs :)
On Fri, 5 Nov 2004, Justin Clift wrote:
> Tom Lane wrote:
> <snip>
>> Yup. 20000 < 23072, so you're losing some proportion of FSM entries.
>> What's worse, the FSM relation table is maxed out (1000 = 1000), which
>> suggests that there are relations not being tracked at all; you have
>> no idea how much space is getting leaked in those.
>>
>> You can determine the number of relations potentially needing FSM
>> entries by
>> select count(*) from pg_class where relkind in ('r','i','t');
>> --- sum over all databases in the cluster to get the right result.
>>
>> Once you've fixed max_fsm_relations, do vacuums in all databases, and
>> then vacuum verbose should give you a usable lower bound for
>> max_fsm_pages.
>
> Would it be useful for max_fsm_relations and max_fsm_pages to update
> themselves dynamically whilst PostgreSQL runs? It sounds like these are
> the kind of settings most people would benefit from if PostgreSQL
> adjusted them itself as needed.
I'm not sure I like this one too much ... but it would be nice if
something like this triggered a warning in the logs, maybe as a feature
of pg_autovacuum itself?
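
For anyone wanting to follow Tom's suggestion, here is a rough sketch of
totalling the relation count across every database in the cluster (it
assumes you can connect to each database with psql; adjust connection
options to taste):

    dbs=$(psql -At -c "select datname from pg_database where datallowconn")
    for db in $dbs; do
        # tables, indexes and toast tables all need FSM relation slots
        psql -At -d "$db" -c \
            "select count(*) from pg_class where relkind in ('r','i','t')"
    done | awk '{ total += $1 } END { print total }'

Set max_fsm_relations comfortably above that total, revacuum, and the
free space map summary at the end of a database-wide VACUUM VERBOSE
should then give a realistic floor for max_fsm_pages, as Tom describes.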
----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: scrappy@hub.org Yahoo!: yscrappy ICQ: 7615664