On Tue, Nov 16, 2010 at 11:12 AM, Alvaro Herrera
<alvherre@alvh.no-ip.org> wrote:
> Magnus was just talking to me about having a better way of controlling
> memory usage on autovacuum. Instead of each worker using up to
> maintenance_work_mem, which ends up as a disaster when DBA A sets it to a
> large value and DBA B raises autovacuum_max_workers, we could simply
> have an "autovacuum_maintenance_memory" setting (name TBD), that defines
> the maximum amount of memory that autovacuum is going to use regardless
> of the number of workers.
>
> So for the initial implementation, we could just have each worker set
> its local maintenance_work_mem to autovacuum_maintenance_memory / max_workers.
> That way there's never excessive memory usage.
>
> This implementation is not ideal: most of the time not all workers are
> running, so much of the budget would sit unused while each vacuum is
> limited to its small slice, and vacuums could be slower. But I think
> it's better than what we currently have.
>
> Thoughts?

I'm a little skeptical about creating more memory tunables. DBAs who
are used to previous versions of PG will find that their vacuum is now
really slow, because they adjusted maintenance_work_mem but not this
new parameter. If we could divide up the vacuum memory intelligently
between the workers in some way, that would be a win. But just
creating a different variable that controls the same thing in
different units doesn't seem to add much.
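To make the trade-off concrete, here is a rough sketch (names are assumed; "autovacuum_maintenance_memory" is only a proposed setting, not an existing GUC) contrasting the static split from the proposal with the kind of "intelligent" division among active workers suggested above:

```python
def worker_limit_static(budget_mb, max_workers):
    # Proposed initial implementation: each worker gets a fixed slice
    # of the budget, even when most workers are idle.
    return budget_mb // max_workers

def worker_limit_dynamic(budget_mb, active_workers):
    # Sketch of a smarter split: divide the budget among only the
    # workers actually running, so a lone worker can use all of it.
    return budget_mb // max(active_workers, 1)

# With a 1024 MB budget and autovacuum_max_workers = 8, the static
# split caps every worker at 128 MB even when only one is running,
# while the dynamic split lets that single worker use the full 1024 MB.
```

This is illustrative only; a real dynamic scheme would also have to handle workers starting and stopping mid-vacuum, since a running VACUUM cannot easily shrink memory it has already allocated.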
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company