Re: Interpreting vacuum verbosity - Mailing list pgsql-general

From Ed L.
Subject Re: Interpreting vacuum verbosity
Msg-id 200405071032.05479.pgsql@bluepolka.net
In response to Re: Interpreting vacuum verbosity  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Interpreting vacuum verbosity  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
On Friday May 7 2004 9:09, Tom Lane wrote:
> "Ed L." <pgsql@bluepolka.net> writes:
> > I guess the activity just totally outran the ability of autovac to keep
> > up.
>
> Could you have been bit by autovac's bug with misreading '3e6' as '3'?
> If you don't have a recent version it's likely to fail to vacuum large
> tables often enough.

No, our autovac logs the number of changes (upd+del for vacuum, upd+ins+del
for analyze) on each round of checks, and we can see it was running when
expected.  The updates and deletes simply far outran the thresholds: the
vacuum threshold was 2000, and at times there were 300,000 outstanding
changes in the 10-30 minutes between vacuums.
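To make the arithmetic concrete, here is a minimal sketch of the kind of trigger check contrib/pg_autovacuum performs.  The function name, default base threshold, and scale factor below are illustrative assumptions, not the actual pg_autovacuum code; the point is only that a threshold of 2000 is crossed 150 times over by 300,000 outstanding changes between passes:

```python
# Hedged sketch of an autovacuum-style trigger check.
# base_threshold, scale_factor, and reltuples are illustrative
# values chosen so the computed threshold matches the 2000
# mentioned above; they are NOT pg_autovacuum's actual defaults.
def vacuum_due(changes, base_threshold=1000, scale_factor=2.0, reltuples=500):
    # threshold = base + scale * (estimated rows in the table)
    threshold = base_threshold + scale_factor * reltuples  # 1000 + 2*500 = 2000
    return changes >= threshold

# 300,000 changes between vacuum rounds vastly exceeds the threshold,
# so each pass fires "on time" yet dead tuples pile up in between.
print(vacuum_due(300_000))  # True
print(vacuum_due(100))      # False
```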

Given the gradual performance degradation we saw over a period of days if
not weeks, and the extremely high numbers of unused tuples, I'm wondering
if something like data fragmentation is occurring, where we have to read
many disk pages to get just a few live tuples off each page.  This cluster
has 3 databases (2 nearly idle) with a total of 600 tables (about 300 in
the active database).  Gzipped dumps are 1.7GB.  max_fsm_relations = 1000
and max_fsm_pages = 10000.  The pattern of ops is a continuous stream of
inserts, sequential scan selects, and deletes.
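One way to spot that kind of bloat is to compare a table's page count against its estimated row count in pg_class (both columns exist in this era of PostgreSQL, though the figures are only as fresh as the last VACUUM/ANALYZE).  The query below is a rough sketch; the 100-page cutoff is an arbitrary filter to skip tiny tables:

```sql
-- Rough bloat check: tables with few live tuples per page are
-- candidates for VACUUM FULL or a larger free space map.
SELECT relname,
       relpages,
       reltuples,
       reltuples / relpages AS tuples_per_page
FROM   pg_class
WHERE  relkind = 'r'
  AND  relpages > 100          -- arbitrary cutoff to skip small tables
ORDER  BY tuples_per_page ASC;
```

If max_fsm_pages (10000 here) is smaller than the total number of pages with free space across the cluster, freed space in pages beyond that limit is never reused, which would produce exactly this slow, steady growth.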

