Re: Autovacuum in the backend - Mailing list pgsql-hackers

From: Hans-Jürgen Schönig
Subject: Re: Autovacuum in the backend
Date:
Msg-id: 42B14122.1050906@cybertec.at
In response to: Re: Autovacuum in the backend (Gavin Sherry <swm@linuxworld.com.au>)
List: pgsql-hackers
> 2) By no fault of its own, autovacuum's level of granularity is the table
> level. For people dealing with non-trivial amounts of data (and we're not
> talking gigabytes or terabytes here), this is a serious drawback. Vacuum
> at peak times can cause very intense IO bursts -- even with the
> enhancements in 8.0. I don't think the solution to the problem is to give
> users the impression that it is solved and then vacuum their tables during
> peak periods. I cannot stress this enough.


I completely agree with Gavin - integrating this kind of thing into the 
background writer, or integrating it with the FSM, would be the ideal solution.

I guess everybody who has ever vacuumed a 2 TB relation will agree 
here. VACUUM is not a problem for small "my cat Minka" databases. 
However, it has been a real problem on large, heavily loaded databases. I 
have even seen people splitting large tables and joining them back together 
with a view just to avoid long vacuums and long CREATE INDEX operations 
(I am not joking - this is serious).

PostgreSQL is more and more being used on really large boxes, so this is an 
increasing problem. Gavin's approach of using a vacuum bitmap seems to be a 
good one. An alternative would be to have some sort of vacuum queue 
containing the set of pages which are reported by the writing process 
(i.e. the background writer or the backends).
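
To make that idea a bit more concrete, here is a minimal, purely illustrative
sketch in plain C (not PostgreSQL code; names like vacuum_queue_push and the
queue/bitmap sizes are made up for this example). Whoever dirties a page
reports its block number into a small queue, a bitmap keeps duplicates out,
and a later vacuum pass visits only the reported blocks instead of scanning
the whole relation:

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t BlockNumber;

#define QUEUE_SIZE  1024        /* max distinct dirty blocks remembered */
#define REL_BLOCKS  8192        /* pretend relation size, for the bitmap */

static BlockNumber queue[QUEUE_SIZE];
static int         queue_len = 0;
static uint8_t     queued_bitmap[REL_BLOCKS / 8];  /* dedup: 1 bit per block */

/* Called by the writing process when it leaves dead tuples on a page. */
static bool
vacuum_queue_push(BlockNumber blkno)
{
    if (queued_bitmap[blkno / 8] & (1 << (blkno % 8)))
        return true;            /* already queued, nothing to do */
    if (queue_len >= QUEUE_SIZE)
        return false;           /* overflow: fall back to a full vacuum */
    queued_bitmap[blkno / 8] |= (uint8_t) (1 << (blkno % 8));
    queue[queue_len++] = blkno;
    return true;
}

/* Called by the vacuum worker: visit only the reported blocks. */
static void
vacuum_queue_drain(void)
{
    for (int i = 0; i < queue_len; i++)
    {
        BlockNumber blkno = queue[i];

        queued_bitmap[blkno / 8] &= (uint8_t) ~(1 << (blkno % 8));
        printf("vacuuming block %u\n", blkno);  /* stand-in for the real work */
    }
    queue_len = 0;
}

int
main(void)
{
    vacuum_queue_push(42);
    vacuum_queue_push(42);      /* duplicate is ignored thanks to the bitmap */
    vacuum_queue_push(7);
    vacuum_queue_drain();       /* visits blocks 42 and 7 only */
    return 0;
}

In a real backend such a structure would of course have to live in shared
memory, be protected by proper locking, and handle queue overflow by falling
back to an ordinary full-table vacuum.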
best regards,
    hans

-- 
Cybertec Geschwinde u Schoenig
Schoengrabern 134, A-2020 Hollabrunn, Austria
Tel: +43/664/393 39 74
www.cybertec.at, www.postgresql.at


