Re: truncate/create slowness - Mailing list pgsql-general

From Patrick.FICHE@AQSACOM.COM
Subject Re: truncate/create slowness
Date
Msg-id 1DC6C8C88D09D51181A40002A5286929B22BB2@intranet
In response to truncate/create slowness  (Joe Maldonado <jmaldonado@webehosting.biz>)
List pgsql-general
I'm not an expert in PostgreSQL, but this reminds me of some problems I ran
into when creating temporary tables in functions...
Some system catalogs like pg_class and pg_attribute kept growing, and VACUUM
was not able to reduce the size of these tables...
Not sure it's the same case, but I hope it helps...
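
For illustration, the pattern I mean looked roughly like this (a simplified
sketch with made-up names; each call creates and drops a temp table, leaving
dead row versions behind in pg_class / pg_attribute until they are vacuumed):

CREATE OR REPLACE FUNCTION process_batch() RETURNS void AS '
BEGIN
    -- the temp table gets new entries in pg_class/pg_attribute on every call
    CREATE TEMP TABLE scratch (id integer, val text);
    -- ... fill and use scratch here ...
    DROP TABLE scratch;
    RETURN;
END;
' LANGUAGE plpgsql;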

----------------------------------------------------------------------------
Patrick Fiche
email : patrick.fiche@aqsacom.com
tel : 01 69 29 36 18
----------------------------------------------------------------------------




-----Original Message-----
From: pgsql-general-owner@postgresql.org
[mailto:pgsql-general-owner@postgresql.org] On Behalf Of Tom Lane
Sent: Thursday, 31 March 2005 17:17
To: Joe Maldonado
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] truncate/create slowness


Joe Maldonado <jmaldonado@webehosting.biz> writes:
> db=# vacuum analyze verbose pg_class;
> INFO:  vacuuming "pg_catalog.pg_class"
> INFO:  index "pg_class_oid_index" now contains 1404 row versions in
> 14486 pages
> DETAIL:  443 index row versions were removed.
> 14362 index pages have been deleted, 14350 are currently reusable.
> CPU 0.04s/0.03u sec elapsed 0.07 sec.
> INFO:  index "pg_class_relname_nsp_index" now contains 1404 row
> versions in 52396 pages
> DETAIL:  443 index row versions were removed.
> 51453 index pages have been deleted, 20000 are currently reusable.
> CPU 0.13s/0.09u sec elapsed 0.23 sec.
> INFO:  "pg_class": removed 443 row versions in 37 pages
> DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  "pg_class": found 443 removable, 1404 nonremovable row versions
> in 49182 pages
> DETAIL:  114 dead row versions cannot be removed yet.
> There were 2546542 unused item pointers.
> 0 pages are entirely empty.
> CPU 0.32s/0.28u sec elapsed 0.67 sec.
> INFO:  analyzing "pg_catalog.pg_class"
> INFO:  "pg_class": 49182 pages, 1290 rows sampled, 1290 estimated total rows
> VACUUM

My goodness :-( you have got a *serious* catalog bloat problem there.
With that many rows, pg_class should be on the order of 50 pages,
not 50K.  The indexes are also roughly a thousand times larger than
they should be.  No wonder searches are slow.  I wonder if any of the
other system catalogs are as bad?  (pg_attribute could be as bad or
worse, if the bloat came from lots of temp table creations.)
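
A quick way to get a feel for that is something like this (just a sketch;
relpages/reltuples are only refreshed by VACUUM or ANALYZE, so the numbers
are approximate):

SELECT relname, relpages, reltuples
FROM pg_class
WHERE relnamespace = (SELECT oid FROM pg_namespace
                      WHERE nspname = 'pg_catalog')
  AND relkind = 'r'
ORDER BY relpages DESC
LIMIT 10;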

It's possible you could get out of this by vacuum full and then reindex
each catalog, but it might be easier to dump and reload the database ...
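
For the first route, that would look roughly like this for each bloated
catalog (a sketch only; run it in a maintenance window, and note that
reindexing system catalogs on older releases may need a standalone backend):

VACUUM FULL pg_class;
REINDEX TABLE pg_class;
VACUUM FULL pg_attribute;
REINDEX TABLE pg_attribute;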

> pg_autovacuum is enabled.

Obviously autovacuum has fallen down badly for you.  What version are
you running exactly, and what are the autovac parameters set to?  Also,
do you have FSM set high enough to cover your database?
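
To pull those numbers, something along these lines works (keep in mind that
pg_autovacuum was a separate contrib daemon at this point, so its thresholds
are set on its command line rather than in postgresql.conf):

SELECT version();
SHOW max_fsm_pages;
SHOW max_fsm_relations;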

            regards, tom lane

