Re: [GENERAL] Postgres stats collector showing high disk I/O - Mailing list pgsql-hackers

From: Alvaro Herrera
Subject: Re: [GENERAL] Postgres stats collector showing high disk I/O
Msg-id: 1274390635-sup-4816@alvh.no-ip.org
Responses: Re: [GENERAL] Postgres stats collector showing high disk I/O (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Excerpts from Justin Pasher's message of Thu May 20 16:10:53 -0400 2010:

> Whenever I clear out the stats for all of the databases, the file
> shrinks down to <1MB. However, it only takes about a day for it to get
> back up to ~18MB, and then the stats collector process starts the heavy
> disk writing again. I do know there are some tables in the database that
> are filled and emptied quite a bit (they are used as temporary "queue"
> tables). The code will VACUUM FULL ANALYZE after the table is emptied to
> get the physical size back down and update the (empty) stats. A plain
> ANALYZE is also run right after the table is filled but before it starts
> processing, so the planner will have good stats on the contents of the
> table. Would this lead to pg_stat file bloat like I'm seeing? Would a
> CLUSTER then ANALYZE instead of a VACUUM FULL ANALYZE make any
> difference? The VACUUM FULL code was set up quite a while back, before
> the coders knew about CLUSTER.
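
For illustration, a minimal sketch of the maintenance cycle being
described, using a hypothetical queue table and index (the CLUSTER ...
USING syntax assumes 8.3 or later; unlike VACUUM FULL, CLUSTER needs an
index to order the rewrite by):

    -- After the queue table is drained: rewrite it to reclaim the
    -- physical space, then refresh the (now empty) planner stats.
    VACUUM FULL ANALYZE work_queue;

    -- The alternative being asked about: CLUSTER also rewrites the
    -- table, ordered by the named index, and would replace the line
    -- above.
    CLUSTER work_queue USING work_queue_pkey;
    ANALYZE work_queue;

    -- After the queue is refilled, before processing starts:
    ANALYZE work_queue;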

I wonder if we should make pgstats write one file per database (plus one
for shared objects), instead of keeping everything in a single file.
That would cut down how much data has to be read and rewritten each time.
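
Roughly, such a split might look like this on disk (file names are
hypothetical, with 16384 standing in for a database OID):

    pgstat.stat       -- today: one file holding stats for every database
    global.stat       -- proposed: shared objects only
    db_16384.stat     -- proposed: one file per database

A request for fresh stats on one database would then only touch that
database's file, instead of forcing the whole thing to be read and
rewritten.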

--
