Re: Need to run CLUSTER to keep performance - Mailing list pgsql-performance

From: Rafael Martinez
Subject: Re: Need to run CLUSTER to keep performance
Date:
Msg-id: 47332200.9050705@usit.uio.no
In response to: Re: Need to run CLUSTER to keep performance  (Heikki Linnakangas <heikki@enterprisedb.com>)
Responses: Re: Need to run CLUSTER to keep performance  (Heikki Linnakangas <heikki@enterprisedb.com>)
           Re: Need to run CLUSTER to keep performance  (Bill Moran <wmoran@collaborativefusion.com>)
           Re: Need to run CLUSTER to keep performance  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance
Heikki Linnakangas wrote:
> Rafael Martinez wrote:

>> The tables with this 'problem' are not big, so CLUSTER finishes very fast
>> and does not have an impact on access because of locking. But we
>> wonder why this happens.
>
> 2 seconds for seq scanning 12 MB worth of data sounds like a lot. Have
> you increased shared_buffers from the default? Which operating system
> are you using? Shared memory access is known to be slower on Windows.
>

This is a server with 8GB of RAM; we are using 25% of it as shared_buffers.
Linux RHEL AS4 with a 2.6.9-55.0.9.ELsmp kernel / x86_64.
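
For reference, this is roughly how the setting can be confirmed from psql
(just a sketch: SHOW is standard, but the exact output format depends on the
PostgreSQL version, and the value is what we would expect, not a paste from
this box):

scanorama=# SHOW shared_buffers;        -- should correspond to roughly 2GB (25% of 8GB)
scanorama=# SHOW effective_cache_size;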

> On a small table like that you could run VACUUM every few minutes
> without much impact on performance. That should keep the table size in
> check.
>

OK, we run VACUUM ANALYZE only once a day, every night. But we would
expect the performance to recover after running VACUUM, and it
doesn't. Only CLUSTER helps.
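
If running VACUUM every few minutes is the way to go, something like this in
cron would be simple enough (only a sketch; vacuumdb and its --analyze/--table
options are standard, the schedule and user are ours to pick):

# /etc/crontab entry: VACUUM ANALYZE the hosts table every 10 minutes
*/10 * * * *  postgres  vacuumdb --analyze --table=hosts scanorama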

I can't see that we need to change the max_fsm_pages parameter. pg_class
and VACUUM VERBOSE ANALYZE give us this information today (a CLUSTER was
executed not long ago):
------------------------------------------------------------------------------
scanorama=# VACUUM VERBOSE ANALYZE hosts;
INFO:  vacuuming "public.hosts"
INFO:  index "hosts_pkey" now contains 20230 row versions in 117 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "hosts": found 0 removable, 20230 nonremovable row versions in
651 pages
DETAIL:  3790 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  vacuuming "pg_toast.pg_toast_376127"
INFO:  index "pg_toast_376127_index" now contains 131 row versions in 2
pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "pg_toast_376127": found 0 removable, 131 nonremovable row
versions in 33 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  analyzing "public.hosts"
INFO:  "hosts": scanned 651 of 651 pages, containing 16440 live rows and
3790 dead rows; 16440 rows in sample, 16440 estimated total rows
VACUUM

scanorama=# SELECT relname, relpages, reltuples from pg_class WHERE relname LIKE 'hosts';
 relname | relpages | reltuples
---------+----------+-----------
 hosts   |      651 |     20230
------------------------------------------------------------------------------
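
What puzzles me is the "3790 dead row versions cannot be removed yet" line.
As far as I understand, rows stay nonremovable while some older transaction
can still see them, so an open (possibly idle) transaction could explain it.
Something like this could be used to look for such sessions (a sketch only,
using the pre-8.3 column names; it needs stats_command_string enabled to show
the query text):

scanorama=# SELECT procpid, usename, query_start, current_query
            FROM pg_stat_activity
            WHERE current_query LIKE '<IDLE> in transaction%'
            ORDER BY query_start;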


Any more ideas?
Regards,
--
 Rafael Martinez, <r.m.guerrero@usit.uio.no>
 Center for Information Technology Services
 University of Oslo, Norway

 PGP Public Key: http://folk.uio.no/rafael/
