Re: autovacuum_work_mem - Mailing list pgsql-hackers

From Simon Riggs
Subject Re: autovacuum_work_mem
Date
Msg-id CA+U5nMLyJx-ent1hm198QtVwsfDySZOqdK+8vAGZjh1LGj3dZA@mail.gmail.com
In response to autovacuum_work_mem  (Peter Geoghegan <pg@heroku.com>)
Responses Re: autovacuum_work_mem
List pgsql-hackers
On 19 October 2013 19:22, Peter Geoghegan <pg@heroku.com> wrote:

> I won't repeat the rationale for the patch here.

I can't see the problem that this patch is trying to solve. I'm having
trouble understanding when I would use this.

VACUUM uses 6 bytes per dead tuple, and autovacuum removes dead tuples
regularly, which keeps their number bounded.
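For a sense of scale (my own back-of-the-envelope numbers, not anything
from the patch), at 6 bytes per dead tuple even a modest memory budget
covers a very large dead-tuple array:

# Rough sizing sketch; the 64MB figure is just an example value.
BYTES_PER_DEAD_TUPLE = 6                       # per-dead-tuple cost, as above
work_mem_bytes = 64 * 1024 * 1024              # hypothetical 64MB budget
print(work_mem_bytes // BYTES_PER_DEAD_TUPLE)  # ~11.2 million dead tuples tracked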

In what circumstances will the memory usage from multiple concurrent
VACUUMs become a problem? In those circumstances, reducing
autovacuum_work_mem will cause more passes through indexes, dirtying
more pages and prolonging the problem workload.
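To put the "more passes" point in concrete terms, here is a small sketch
(Python, with entirely hypothetical numbers and a helper name of my own)
of how shrinking the memory budget multiplies index-cleanup passes:

# Sketch only: assumes VACUUM must scan every index each time the
# dead-tuple array (6 bytes per entry) fills up, as described above.
BYTES_PER_DEAD_TUPLE = 6

def index_passes(dead_tuples, work_mem_bytes):
    per_pass = work_mem_bytes // BYTES_PER_DEAD_TUPLE
    return -(-dead_tuples // per_pass)         # ceiling division

dead = 50 * 1000 * 1000                        # hypothetical 50M dead tuples
print(index_passes(dead, 256 * 1024 * 1024))   # 256MB budget -> 2 passes
print(index_passes(dead, 32 * 1024 * 1024))    # 32MB budget  -> 9 passes

Each extra pass means re-reading and re-dirtying index pages, which is
the prolonging effect described above.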

I agree that multiple concurrent VACUUMs could be a problem, but this
doesn't solve that problem; it just makes things worse.

Freezing doesn't require any memory at all, so wraparound vacuums
won't be controlled by this parameter.

Can we re-state what the problem actually is here and discuss how to
solve it? (The reference [2] didn't provide a detailed explanation of
the problem, only the reason we want a separate parameter.)

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


