Thread: truncate/create slowness

truncate/create slowness

From
Joe Maldonado
Date:
Hello all,
I frequently find that TRUNCATE TABLE and CREATE OR REPLACE FUNCTION
are both very slow, taking 50 seconds or more to complete. We have to
run both commands every minute, so this makes our application
non-functional. It is not a gradual deterioration over time, though.

Sometimes they run in under a second, and sometimes they take over 50
seconds. As for other jobs on the system, nothing accesses these
tables or this function when it happens. Also, when I strace the
postgres backend running either command, I find it doing a ton of
_llseek/read calls, as shown below. I don't find it in S state or
sleeping on a semop (I understand that would happen if the function
were being used by some other client).

So I believe most of this time is spent in I/O. Yet there is no reason
to believe slow I/O is the cause, since the I/O load on the system
does not vary much from one minute to the next.

pg_autovacuum is enabled, and this behavior does not necessarily
coincide with any vacuum running at the same time (although a vacuum
of pg_class does seem to slow these commands down considerably).
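
One way to check for a concurrent vacuum from another session, as a
rough sketch (it assumes stats_command_string is turned on, otherwise
current_query is not populated), is:

  SELECT procpid, usename, current_query
  FROM pg_stat_activity;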

I suspect that pg_class has too many pages (49182, as shown below) and
that for some reason the above commands spend a lot of time updating
it. VACUUM/ANALYZE on pg_class has not helped. And since the time
these commands take is not consistently fast or slow, I'm puzzled.

Any help or feedback would be greatly appreciated.

data
-------
_llseek(9, 81469440, [81469440], SEEK_SET) = 0
read(9, "\335\1\0\0P10s\23\0\0\0\\\0\244\0\0 \1 @\236z\3\214\234"..., 8192) = 8192
_llseek(9, 81461248, [81461248], SEEK_SET) = 0
read(9, "\335\1\0\0l\310\311q\23\0\0\0\\\0\260\0\0 \1 D\236r\3\214"..., 8192) = 8192
_llseek(9, 81453056, [81453056], SEEK_SET) = 0
...

Query took 78 seconds:
CREATE OR REPLACE FUNCTION a() RETURNS SETOF a_type AS '
DECLARE
  q   TEXT := '''';
  rec a_type%ROWTYPE;
BEGIN
  q := ''SELECT * FROM a1'';
  FOR rec IN EXECUTE q LOOP
    RETURN NEXT rec;
  END LOOP;
  RETURN;
END;
' LANGUAGE 'plpgsql';

db=# select reltuples, relpages from pg_class where relname = 'pg_class';
 reltuples | relpages
-----------+----------
      1425 |    49182
(1 row)

db=# vacuum analyze verbose pg_class;
INFO:  vacuuming "pg_catalog.pg_class"
INFO:  index "pg_class_oid_index" now contains 1404 row versions in
14486 pages
DETAIL:  443 index row versions were removed.
14362 index pages have been deleted, 14350 are currently reusable.
CPU 0.04s/0.03u sec elapsed 0.07 sec.
INFO:  index "pg_class_relname_nsp_index" now contains 1404 row
versions in 52396 pages
DETAIL:  443 index row versions were removed.
51453 index pages have been deleted, 20000 are currently reusable.
CPU 0.13s/0.09u sec elapsed 0.23 sec.
INFO:  "pg_class": removed 443 row versions in 37 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "pg_class": found 443 removable, 1404 nonremovable row versions
in 49182 pages
DETAIL:  114 dead row versions cannot be removed yet.
There were 2546542 unused item pointers.
0 pages are entirely empty.
CPU 0.32s/0.28u sec elapsed 0.67 sec.
INFO:  analyzing "pg_catalog.pg_class"
INFO:  "pg_class": 49182 pages, 1290 rows sampled, 1290 estimated total rows
VACUUM

-Joe

Re: truncate/create slowness

From
"Qingqing Zhou"
Date:
"Joe Maldonado" <jmaldonado@webehosting.biz> writes
> I suspect that pg_class has too many pages (49182 as below) and for
> some reason the above commands spend a lot of time updating it.
> vacuum/analyze on pg_class has not helped. Also, since the time taken
> for these commands is not consistently fast or slow, I'm puzzled.
>

The statistics (reltuples, relpages) in pg_class are not always fresh.
If you want to actually reduce the size of the table, run VACUUM with
the FULL option. You may also want to check whether the time is being
spent on checkpoints or waiting for locks.
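
For example, while the TRUNCATE or CREATE is stuck, a rough check from
another session could look like this (a sketch only; column names are
as in 7.4/8.0):

  -- is any lock requested but not yet granted?
  SELECT relation::regclass, pid, mode, granted
  FROM pg_locks
  WHERE NOT granted;

  -- and to shrink the catalog itself:
  VACUUM FULL VERBOSE pg_class;

For checkpoints, SHOW checkpoint_segments and (with checkpoint_warning
set) watch the server log for complaints about checkpoints occurring
too frequently.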

Regards,
Qingqing



Re: truncate/create slowness

From
Tom Lane
Date:
Joe Maldonado <jmaldonado@webehosting.biz> writes:
> db=# vacuum analyze verbose pg_class;
> INFO:  vacuuming "pg_catalog.pg_class"
> INFO:  index "pg_class_oid_index" now contains 1404 row versions in
> 14486 pages
> DETAIL:  443 index row versions were removed.
> 14362 index pages have been deleted, 14350 are currently reusable.
> CPU 0.04s/0.03u sec elapsed 0.07 sec.
> INFO:  index "pg_class_relname_nsp_index" now contains 1404 row
> versions in 52396 pages
> DETAIL:  443 index row versions were removed.
> 51453 index pages have been deleted, 20000 are currently reusable.
> CPU 0.13s/0.09u sec elapsed 0.23 sec.
> INFO:  "pg_class": removed 443 row versions in 37 pages
> DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  "pg_class": found 443 removable, 1404 nonremovable row versions
> in 49182 pages
> DETAIL:  114 dead row versions cannot be removed yet.
> There were 2546542 unused item pointers.
> 0 pages are entirely empty.
> CPU 0.32s/0.28u sec elapsed 0.67 sec.
> INFO:  analyzing "pg_catalog.pg_class"
> INFO:  "pg_class": 49182 pages, 1290 rows sampled, 1290 estimated total rows
> VACUUM

My goodness :-( you have got a *serious* catalog bloat problem there.
With that many rows, pg_class should be on the order of 50 pages,
not 50K.  The indexes are also roughly a thousand times larger than
they should be.  No wonder searches are slow.  I wonder if any of the
other system catalogs are as bad?  (pg_attribute could be as bad or
worse, if the bloat came from lots of temp table creations.)
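
A quick way to see which catalogs are bloated, as a rough sketch, is
to sort the pg_catalog relations by page count (relpages is only
updated by VACUUM/ANALYZE, so the numbers are approximate):

  SELECT c.relname, c.relpages, c.reltuples
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE n.nspname = 'pg_catalog'
  ORDER BY c.relpages DESC
  LIMIT 10;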

It's possible you could get out of this by vacuum full and then reindex
each catalog, but it might be easier to dump and reload the database ...
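
In concrete terms, the catalog-only route would be roughly the
following (run as a superuser during a quiet period; older releases
may insist on a standalone backend for reindexing system catalogs):

  VACUUM FULL VERBOSE pg_catalog.pg_class;
  VACUUM FULL VERBOSE pg_catalog.pg_attribute;
  REINDEX TABLE pg_catalog.pg_class;
  REINDEX TABLE pg_catalog.pg_attribute;

The dump-and-reload route is pg_dump, drop (or rename) the database,
recreate it, and restore.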

> pg_autovacuum is enabled.

Obviously autovacuum has fallen down badly for you.  What version are
you running exactly, and what are the autovac parameters set to?  Also,
do you have FSM set high enough to cover your database?
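
For reference, the relevant settings can be read straight off the
server (names as of 7.4/8.0):

  SHOW server_version;
  SHOW max_fsm_pages;
  SHOW max_fsm_relations;

A database-wide VACUUM VERBOSE should also report, at the end of its
output, how much of the free space map is actually in use.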

            regards, tom lane

Re: truncate/create slowness

From
Patrick.FICHE@AQSACOM.COM
Date:
I'm not an expert in PostgreSQL, but this reminds me of some problems
I ran into when creating temporary tables inside functions...
Some system tables like pg_class and pg_attribute kept growing, and
VACUUM was not able to reduce the size of these tables...
Not sure it's the same case, but I hope it helps...

---------------------------------------------------------------
Patrick Fiche
email : patrick.fiche@aqsacom.com
tel : 01 69 29 36 18
---------------------------------------------------------------





Re: truncate/create slowness

From
"Julian Scarfe"
Date:
> It's possible you could get out of this by vacuum full and then reindex
> each catalog, but it might be easier to dump and reload the database ...

I've got a similar issue, but caused by neglect rather than anything
to do with pg_autovacuum.

Do you have any rules of thumb for deciding when a pg_dumpall/restore is
likely to be faster than a vacuum full?  Or perhaps more straightforwardly,
how would you expect the time required for a vacuum full to scale with pages
used and rows in the table?

Thanks

Julian Scarfe




Re: truncate/create slowness

From
Tom Lane
Date:
"Julian Scarfe" <julian@avbrief.com> writes:
> Do you have any rules of thumb for deciding when a pg_dumpall/restore is
> likely to be faster than a vacuum full?  Or perhaps more straightforwardly,
> how would you expect the time required for a vacuum full to scale with pages
> used and rows in the table?

There is a factor that's proportional to the number of tuples deleted,
and a bigger factor that's proportional to the number of tuples moved
while trying to compact the table.  If you've got a seriously bloated
table then it's fairly likely that *all* the surviving tuples will get
moved because none of them are near the start of the table already :-(

Having said that, though, a vacuum full and reindex on pg_class and
pg_attribute will certainly solve Joe's problem faster than a dump
and reload, simply because there's not much stuff in those catalogs
compared to any reasonably-sized user tables.

            regards, tom lane