Re: \d on database with a lot of tables is slow - Mailing list pgsql-hackers

From Jon Jensen
Subject Re: \d on database with a lot of tables is slow
Msg-id Pine.LNX.4.63.0509301751550.5391@ynfu.ovalna.fjrygre.arg
In response to Re: \d on database with a lot of tables is slow  ("Jim C. Nasby" <jnasby@pervasive.com>)
List pgsql-hackers
On Fri, 30 Sep 2005, Jim C. Nasby wrote:

> ERROR:  cache lookup failed for relation 1906465919
>
> It is on an exact table name.  When we retry the describe on a failure,
> sometimes it works and sometimes it fails again.  When it fails again
> the relation number is different. Also, \d schema.table always returns
> quickly and never errors. \d table is slow and produces the error fairly
> often.
>
> They're using 8.0.x; I'm pretty certain it's 8.0.3.

We have the exact same problem on a 7.4.5 database. Some basic info on the 
database: psql's \d returns 424 rows, and the on-disk size of the database 
is about 11 GB. A standalone \dt throws the same occasional error.

I've suspected that it may be caused by the ongoing periodic (at least once an 
hour) rebuilding of certain generated tables with TRUNCATE and then INSERT 
INTO the_table SELECT ... inside a transaction. But I don't have any proof 
of that; it's just the most obvious difference compared to our other 
databases.
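For concreteness, the hourly rebuild pattern I mean looks roughly like this 
(the source table and column list are made up for illustration; only 
the_table and the TRUNCATE-then-INSERT shape come from our actual jobs):

```sql
-- Sketch of the periodic rebuild; names other than the_table are hypothetical.
BEGIN;
TRUNCATE the_table;          -- takes an exclusive lock on the table
INSERT INTO the_table
    SELECT ...               -- regenerate contents from the source data
    FROM some_source_table;
COMMIT;
```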

It does seem like I've encountered the error less often since increasing 
the max_fsm_pages setting, which made VACUUM more effective and reduced 
pg_class bloat. But OTOH I also trained myself not to run \d there very 
often, since it was so slow and failed so often, so that may just be 
coincidence. :)
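For reference, the change was along these lines in postgresql.conf (the 
numbers below are illustrative only, not a recommendation; the right value 
depends on how much free space VACUUM finds across the whole cluster):

```
# postgresql.conf -- free space map sizing (7.4-era parameters)
max_fsm_pages = 200000      # raised from the default so VACUUM can track
                            # free space in all our tables
max_fsm_relations = 1000    # one FSM slot per relation being tracked
```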

Jon

--
Jon Jensen
End Point Corporation
http://www.endpoint.com/
Software development with Interchange, Perl, PostgreSQL, Apache, Linux, ...

