From: Jim Nasby
Subject: Re: autovac issue with large number of tables
Date:
Msg-id: c10bda63-4de8-885c-3271-75dd07be931b@amazon.com
In response to: Re: autovac issue with large number of tables (Masahiko Sawada <masahiko.sawada@2ndquadrant.com>)
Responses: Re: autovac issue with large number of tables
List: pgsql-hackers
On 7/27/20 1:51 AM, Masahiko Sawada wrote:

> On Mon, 27 Jul 2020 at 06:43, Nasby, Jim <nasbyj@amazon.com> wrote:
>> A database with a very large number of tables eligible for autovacuum
>> can result in autovacuum workers “stuck” in a tight loop of
>> table_recheck_autovac() constantly reporting nothing to do on the
>> table. This is because a database with a very large number of tables
>> means it takes a while to search the statistics hash to verify that
>> the table still needs to be processed[1]. If a worker spends some time
>> processing a table, when it’s done it can spend a significant amount
>> of time rechecking each table that it identified at launch (I’ve seen
>> a worker in this state for over an hour). A simple work-around in this
>> scenario is to kill the worker; the launcher will quickly fire up a new
>> worker on the same database, and that worker will build a new list of
>> tables.
>>
>> That’s not a complete solution though… if the database contains a large
>> number of very small tables you can end up in a state where one or two
>> workers are busy chugging through those small tables so quickly that
>> any additional workers spend all their time in table_recheck_autovac(),
>> because that takes long enough that the additional workers are never
>> able to “leapfrog” the workers that are doing useful work.
>>
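To put rough numbers on the behaviour described above (the table count
and per-recheck cost below are assumptions for illustration, not
measurements from this report), a minimal standalone C model of a worker
walking its launch-time list while sibling workers have already claimed
every table looks like this:

#include <stdio.h>
#include <stdbool.h>

#define N_TABLES 200000                 /* assumed size of the launch-time list */

static double recheck_cost_ms = 1.0;    /* assumed cost of one stats-hash search */

/* stand-in for "another worker already handled this table" */
static bool already_claimed(int table) { (void) table; return true; }

int main(void)
{
    double spent_ms = 0.0;
    int    vacuumed = 0;

    for (int t = 0; t < N_TABLES; t++)
    {
        /* the recheck is paid whether or not there is anything left to do */
        spent_ms += recheck_cost_ms;
        if (already_claimed(t))
            continue;           /* "nothing to do on the table" */
        vacuumed++;
    }

    printf("tables vacuumed: %d, time spent rechecking: %.0f s\n",
           vacuumed, spent_ms / 1000.0);
    return 0;
}

With those assumed numbers a single pass spends about 200 seconds in
rechecks while vacuuming nothing; larger per-recheck costs or table
counts get to the hour-plus stalls described above.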
> As another solution, I've been considering adding a queue of table
> OIDs that need to be vacuumed/analyzed in shared memory (e.g. on
> DSA). Since all autovacuum workers running on the same database can
> see a consistent queue, the issue explained above won't happen, and
> it probably makes the implementation of prioritization of tables
> being vacuumed easier, which is sometimes discussed on pgsql-hackers.
> I guess it might be worth discussing this idea as well.
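For reference, the kind of shared claim queue described above might look
roughly like the following minimal sketch. This is hypothetical and not
part of any posted patch: plain C11 atomics and a static array stand in
for a structure that would really live in dynamic shared memory (DSA)
and be populated when the table list is built.

#include <stdio.h>
#include <stdint.h>
#include <stdatomic.h>

typedef uint32_t Oid;            /* stand-in for PostgreSQL's Oid */

typedef struct
{
    _Atomic uint32_t next;       /* index of the next unclaimed entry */
    uint32_t         ntables;
    Oid              oids[8];    /* would be sized and allocated in DSA */
} AutovacQueue;

/* Claim the next table, or return 0 (InvalidOid) once the queue is drained. */
static Oid
queue_claim_next(AutovacQueue *q)
{
    uint32_t idx = atomic_fetch_add(&q->next, 1);

    return (idx < q->ntables) ? q->oids[idx] : 0;
}

int main(void)
{
    AutovacQueue q = { .next = 0, .ntables = 5,
                       .oids = { 16384, 16390, 16402, 16417, 16425 } };
    Oid          relid;

    /* each worker would loop like this instead of rescanning a private
     * launch-time list */
    while ((relid = queue_claim_next(&q)) != 0)
        printf("worker claims relation %u\n", (unsigned) relid);

    return 0;
}

Because the claim index only advances, each table is checked and vacuumed
by exactly one worker, which is what removes the repeated
table_recheck_autovac() passes that the extra workers currently burn
their time in.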
I'm in favor of trying to improve scheduling (especially allowing users 
to control how things are scheduled), but that's a far more invasive 
patch. I'd like to get something like this patch in without waiting on a 
significantly larger effort.


