pg_autovacuum not having enough suction ? - Mailing list pgsql-performance

From Otto Blomqvist
Subject pg_autovacuum not having enough suction ?
Date
Msg-id d1v0a7$iau$1@news.hub.org
List pgsql-performance
Hello !

I'm running pg_autovacuum on a 1 GHz machine with an 80 GB disk and 512 MB of RAM. The database is
about 30 MB tarred. We have about 50,000 updates/inserts/deletes per day. It
runs beautifully for ~4 days. Then the HDD activity and the postmaster CPU
usage go up a lot, even though I have plenty (?) of FSM pages (2 million).
I perform a vacuum and everything is back to normal for another 4 days. I
could schedule a manual vacuum each day, but the util is not called
pg_SemiAutoVacuum, so I'm hoping this is not necessary. The same user that
ran the manual vacuum is running pg_autovacuum. The normal CPU usage is
about 10% with little HD activity.

I'm running pg_autovacuum with the following flags: -d 3 -v 300 -V 0.1 -s 180 -S 0.1 -a 200 -A 0.1
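
If I understand pg_autovacuum's threshold formula correctly (base value + scale factor * reltuples), then with reltuples = 106228 those flags work out to roughly:

  vacuum threshold  = 300 + 0.1 * 106228 ≈ 10922 obsolete tuples
  analyze threshold = 200 + 0.1 * 106228 ≈ 10822 changed tuples

which matches the analyze_threshold/vacuum_threshold values in the debug output below.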

Below are some snippets regarding vacuuming of the busiest table.

This is the last VACUUM ANALYZE performed by pg_autovacuum before I ran the
manual vacuum

[2005-03-24 02:05:43 EST] DEBUG:   Performing: VACUUM ANALYZE "public"."file_92"
[2005-03-24 02:05:52 EST] INFO:    table name: secom."public"."file_92"
[2005-03-24 02:05:52 EST] INFO:    relid: 9384219;   relisshared: 0
[2005-03-24 02:05:52 EST] INFO:    reltuples: 106228.000000;  relpages: 9131
[2005-03-24 02:05:52 EST] INFO:    curr_analyze_count: 629121;  curr_vacuum_count: 471336
[2005-03-24 02:05:52 EST] INFO:    last_analyze_count: 629121;  last_vacuum_count: 471336
[2005-03-24 02:05:52 EST] INFO:    analyze_threshold: 10822;  vacuum_threshold: 10922

This is the last pg_autovacuum debug output before I ran the manual vacuum

[2005-03-24 09:18:44 EST] INFO:    table name: secom."public"."file_92"
[2005-03-24 09:18:44 EST] INFO:    relid: 9384219;   relisshared: 0
[2005-03-24 09:18:44 EST] INFO:    reltuples: 106228.000000;  relpages: 9131
[2005-03-24 09:18:44 EST] INFO:    curr_analyze_count: 634119;  curr_vacuum_count: 476095
[2005-03-24 09:18:44 EST] INFO:    last_analyze_count: 629121;  last_vacuum_count: 471336
[2005-03-24 09:18:44 EST] INFO:    analyze_threshold: 10822;  vacuum_threshold: 10922

file_92 had about 10,000 inserts/deletes between 02:05 and 09:20.
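
If I'm reading the counters right, between those two snapshots curr_analyze_count rose by 634119 - 629121 = 4998 and curr_vacuum_count by 476095 - 471336 = 4759, both well under the ~10822/10922 thresholds, so pg_autovacuum presumably saw no reason to vacuum the table again before I did it by hand.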

Then I ran a VACUUM VERBOSE:

24 Mar 05 - 9:20 AM
INFO:  vacuuming "public.file_92"
INFO:  index "file_92_record_number_key" now contains 94 row versions in 2720 pages
DETAIL:  107860 index row versions were removed.
2712 index pages have been deleted, 2060 are currently reusable.
CPU 0.22s/0.64u sec elapsed 8.45 sec.
INFO:  "file_92": removed 107860 row versions in 9131 pages
DETAIL:  CPU 1.13s/4.27u sec elapsed 11.75 sec.
INFO:  "file_92": found 107860 removable, 92 nonremovable row versions in 9131 pages
DETAIL:  91 dead row versions cannot be removed yet.
There were 303086 unused item pointers.
0 pages are entirely empty.
CPU 1.55s/5.00u sec elapsed 20.86 sec.
INFO:  "file_92": truncated 9131 to 8423 pages
DETAIL:  CPU 0.65s/0.03u sec elapsed 5.80 sec.
INFO:  free space map: 57 relations, 34892 pages stored; 34464 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 2000000 pages = 11784 kB shared memory.

Also, file_92 is just a temporary storage area for records waiting to be
processed. Records typically stay in there for ~10 seconds.
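
For anyone who wants to check the numbers, a quick way to watch the churn on that table is something like the sketch below; if I understand it right, these pg_stat_user_tables counters are what pg_autovacuum bases its activity counts on:

  -- per-table activity counters from the stats collector
  SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
  FROM pg_stat_user_tables
  WHERE relname = 'file_92';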

Over 100,000 index rows removed and 300,000 unused item pointers? How could
autovacuum let this happen? I would estimate the table had about 10,000
inserts/deletes between the last pg_autovacuum VACUUM ANALYZE and my
manual VACUUM VERBOSE.

It is like the suction is not strong enough ;)

Any ideas? It would be greatly appreciated, as this is taking me one step
closer to the loony bin.

Thanks

/Otto Blomqvist


