Re: Large pgstat.stat file causes I/O storm - Mailing list pgsql-hackers

From Cristian Gafton
Subject Re: Large pgstat.stat file causes I/O storm
Date
Msg-id Pine.LNX.4.64.0801291557510.19796@alienpad.rpath.com
In response to Re: Large pgstat.stat file causes I/O storm  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Large pgstat.stat file causes I/O storm  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Tue, 29 Jan 2008, Tom Lane wrote:

> (Pokes around in the code...)  I think the problem here is that the only
> active mechanism for flushing dead stats-table entries is
> pgstat_vacuum_tabstat(), which is invoked by a VACUUM command or an
> autovacuum.  Once-a-day VACUUM isn't gonna cut it for you under those
> circumstances.  What you might do is just issue a VACUUM on some
> otherwise-uninteresting small table, once an hour or however often you
> need to keep the stats file bloat to a reasonable level.
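[The periodic VACUUM Tom suggests could be driven from cron; the database and table names below are placeholders, and the hourly interval is just his example:]

```
# crontab entry (hypothetical names): hourly VACUUM on a small dummy
# table, solely to trigger pgstat_vacuum_tabstat() and let the
# collector drop dead stats entries.
0 * * * * psql -d mydb -c 'VACUUM stats_kick;'
```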

I just ran a vacuumdb -a on the box - the pgstat file is still >90MB in 
size. If vacuum is supposed to clean up the cruft from pgstat, then I 
don't know if we're looking at the right cruft - I kind of expected the 
pgstat file to go down in size and the I/O storm to subside, but that is 
not happening after vacuum.

I will try to instrument the application to record the OIDs of the temp 
tables it creates and investigate from that angle, but in the meantime is 
there any way to reset the stats collector altogether? Could this be a 
corrupt stats file that gets read and written right back on every loop 
without any sort of validation?
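(The closest thing I've found is pg_stat_reset(), which only zeroes the 
counters for the current database -- it's unclear to me whether that would 
actually shrink the file, but for reference:)

```
-- Resets all statistics counters for the current database;
-- must be run in each database whose stats should be cleared.
SELECT pg_stat_reset();
```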

Thanks,

Cristian
-- 
Cristian Gafton
rPath, Inc.


