A database with a very large number of tables eligible for autovacuum can leave autovacuum workers “stuck” in a tight loop of table_recheck_autovac() calls that constantly report nothing to do. With that many tables, it takes a while to search the statistics hash to verify that a given table still needs to be processed[1]. So if a worker spends some time processing one table, once it’s done it can spend a significant amount of time rechecking every remaining table it identified at launch (I’ve seen a worker in this state for over an hour). A simple work-around in this scenario is to kill the worker; the launcher will quickly fire up a new worker on the same database, and that worker will build a fresh list of tables.
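To put rough, purely hypothetical numbers on that: if each table_recheck_autovac() call costs ~50 ms (dominated by re-reading and searching the statistics), then a launch-time list of 50,000 tables costs 50,000 × 0.05 s ≈ 42 minutes of rechecking, even when every single recheck reports nothing to do.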
That’s not a complete solution, though: if the database contains a large number of very small tables, you can end up in a state where one or two workers chug through those small tables so quickly that any additional workers spend all their time in table_recheck_autovac(). Each recheck takes long enough that the additional workers are never able to “leapfrog” the workers that are doing useful work.
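For anyone who wants to see the shape of the problem without digging into autovacuum.c, below is a tiny standalone C sketch. This is a toy model only, not the attached patch and not PostgreSQL source; StatsEntry, refresh_stats(), recheck_table(), and all constants are invented for illustration. It mimics the pattern the profile in [1] suggests: every recheck pays a cost proportional to the number of tables in the database, so walking the whole launch-time list is roughly quadratic in the table count.

/*
 * Toy model (illustration only): each recheck re-scans per-table stats
 * for the entire database before concluding there is nothing to do.
 */
#include <stdio.h>
#include <stdlib.h>

#define N_TABLES 50000          /* hypothetical table count */

typedef struct
{
    unsigned long n_dead_tuples;
} StatsEntry;

/* Stand-in for refreshing the stats snapshot: touches every entry. */
static void
refresh_stats(StatsEntry *stats)
{
    for (int i = 0; i < N_TABLES; i++)
        stats[i].n_dead_tuples = 0;     /* everything already vacuumed */
}

/* Stand-in for table_recheck_autovac(): refresh, then check one table. */
static int
recheck_table(StatsEntry *stats, int relid)
{
    refresh_stats(stats);               /* the expensive part */
    return stats[relid].n_dead_tuples > 0;
}

int
main(void)
{
    StatsEntry *stats = calloc(N_TABLES, sizeof(StatsEntry));
    long long   touched = 0;

    if (!stats)
        return 1;

    /* Walk the launch-time list; every recheck reports nothing to do. */
    for (int relid = 0; relid < N_TABLES; relid++)
    {
        (void) recheck_table(stats, relid);
        touched += N_TABLES;            /* stats entries scanned this pass */
    }
    printf("%d rechecks, ~%lld stats entries touched\n", N_TABLES, touched);
    free(stats);
    return 0;
}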
PoC patch attached.
[1]: top hits from `perf top -p xxx` on an affected worker
Samples: 72K of event 'cycles', Event count (approx.): 17131910436
Overhead  Shared Object  Symbol
  42.62%  postgres       [.] hash_search_with_hash_value
  10.34%  libc-2.17.so   [.] __memcpy_sse2
   6.99%  [kernel]       [k] copy_user_enhanced_fast_string
   4.73%  libc-2.17.so   [.] _IO_fread
   3.91%  postgres       [.] 0x00000000002d6478
   2.95%  libc-2.17.so   [.] _IO_getc
   2.44%  libc-2.17.so   [.] _IO_file_xsgetn
   1.73%  postgres       [.] hash_search
   1.65%  [kernel]       [k] find_get_entry
   1.10%  postgres       [.] hash_uint32
   0.99%  libc-2.17.so   [.] __memcpy_ssse3_back