Thread: special table queries slow until dump/restore

From: Damon Hart
Date:
Hi all -

I am experiencing continually degrading performance on queries run
against the special system tables. I notice the slowdown when these
metadata queries run implicitly "behind the scenes", such as when
psql with readline support tries to complete a table name or pgAdmin
retrieves table column info to populate the GUI headers. Performance
picked up dramatically when I had to dump/restore this weekend. I have
not seen comparable problems with ordinary queries, which suggests the
issue is specific to the system tables rather than something more general.

The PostgreSQL database cluster involved consists of a couple of
databases, one of which holds effectively all the data. It is
fairly unremarkable in size (10-15 GB of disk space consumed in
/var/lib/pgsql across roughly 120 tables, all in the public schema), and
the only thing that might set it apart is that several update
processes are constantly contributing new data by creating and
dropping temporary tables (and possibly inflating some namespace?). My
configuration is PostgreSQL 7.4.1 on Linux 2.4.22 (Fedora Core 1), i386.

I did not find anything directly relevant in the mailing list archives,
hence this post. Is there some VACUUM command or option I should be
running that operates on the system tables? Short of letting this
database run with the appropriate logging turned on, is there any way to
track this down to specific queries and get better insight? It could
take several weeks for a pronounced slowdown to develop.
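
One cheap check in the meantime is to watch the physical size of the
catalogs grow over time. The following query is only an illustrative
sketch (the catalog names are standard PostgreSQL; which catalogs matter
most for temp-table churn is my assumption):

```sql
-- Snapshot the on-disk size of the catalogs most affected by
-- creating and dropping tables; run it periodically and compare.
-- relpages is only updated by VACUUM/ANALYZE, so vacuum first
-- if you want current numbers.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('pg_class', 'pg_attribute', 'pg_type', 'pg_depend')
ORDER BY relpages DESC;
```

If relpages keeps climbing even though the number of live tables is
roughly constant, that would point at catalog bloat.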

Has anyone else seen a similar phenomenon? Any info that corrects
my ignorance, or at least helps me formulate a usable bug report, would
be appreciated.

Damon



Re: special table queries slow until dump/restore

From: Tom Lane
Date:
Damon Hart <dhcom@sundial.com> writes:
> I am experiencing continually degrading performance on queries run
> against the special system tables. I notice the slowdown when these
> metadata queries run implicitly "behind the scenes", such as when
> psql with readline support tries to complete a table name or pgAdmin
> retrieves table column info to populate the GUI headers. Performance
> picked up dramatically when I had to dump/restore this weekend.

> ... several update
> processes are constantly contributing new data by way of creating and
> dropping temporary tables

It sounds to me like you aren't keeping up with housekeeping adequately.
You need to be sure that the system catalogs are vacuumed often enough
and that your FSM settings are large enough to keep track of all the
free space.  These are not different considerations from avoiding bloat
in user tables and indexes --- see the archives for plenty of discussion.
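
A minimal sketch of the maintenance being described (the catalog names
and configuration parameters are standard PostgreSQL; the particular
catalogs chosen and the numeric values are assumptions to be tuned
against your own workload):

```sql
-- Run as a superuser, in each database of the cluster, on a schedule
-- frequent enough to keep pace with the temp-table churn.
-- A plain VACUUM (not VACUUM FULL) is sufficient if run often enough.
VACUUM ANALYZE pg_catalog.pg_class;
VACUUM ANALYZE pg_catalog.pg_attribute;
VACUUM ANALYZE pg_catalog.pg_type;
VACUUM ANALYZE pg_catalog.pg_depend;

-- A database-wide VACUUM VERBOSE reports at the end how many page
-- slots the free space map needs; compare that against these
-- postgresql.conf settings (values below are placeholders):
--   max_fsm_pages     = 200000  -- size to the VERBOSE report
--   max_fsm_relations = 1000    -- should exceed total tables + indexes
```

If the FSM is too small, space freed by dropped temp tables is forgotten
and the catalogs (and their indexes) bloat until a dump/restore rebuilds
them, which matches the symptom described above.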

            regards, tom lane