Re: vacuumdb --jobs deadlock: how to avoid pg_catalog ? - Mailing list pgsql-general

From Michael Lewis
Subject Re: vacuumdb --jobs deadlock: how to avoid pg_catalog ?
Date
Msg-id CAHOFxGoNCf7tLfv2Or8hnyvho7bie0kqkCHHkVJ3d7dady4MXw@mail.gmail.com
In response to Re: vacuumdb --jobs deadlock: how to avoid pg_catalog ?  (Eduard Rozenberg <edrozenberg@gmail.com>)
List pgsql-general
On Thu, May 14, 2020 at 2:20 PM Eduard Rozenberg <edrozenberg@gmail.com> wrote:
I did verify postgresql.conf has always been properly configured re: autovacuum: 'autovacuum = on' and 'track_counts = on'

This may be insufficient to keep up if you have large tables. The default scale factor allows 20% of a table's rows to be dead before autovacuum kicks in to mark that space as available for reuse. Assuming you have the I/O capacity, and prefer doing a little cleanup more often rather than HUGE cleanup work all at once on rare occasions, it may be worth turning down autovacuum_vacuum_scale_factor. You can tweak these settings on large tables only, or increase autovacuum_vacuum_threshold at the same time to compensate a bit for the lower scale factor.

You can also look at pg_stat_activity for autovacuums; if you see that some are running for hours, they are probably trying to do too much work all at once, having waited too long before tidying up. Also, the default autovacuum_vacuum_cost_delay was lowered from 20ms to 2ms in PG12, so that may be worth adopting as a best practice even on older versions.
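As a sketch of the per-table tuning described above (table name "big_table" and the specific threshold values are placeholders, not recommendations from this thread):

```sql
-- Override the global defaults on one large table: trigger a vacuum when
-- roughly 1% of rows are dead plus a fixed threshold, instead of the 20%
-- default scale factor.
ALTER TABLE big_table SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_threshold    = 10000
);

-- Spot long-running autovacuum workers via pg_stat_activity.
SELECT pid, now() - xact_start AS duration, query
FROM pg_stat_activity
WHERE query LIKE 'autovacuum:%'
ORDER BY duration DESC;

-- On versions before PG12, adopt the newer, lower default cost delay
-- (requires superuser; takes effect after a config reload).
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '2ms';
SELECT pg_reload_conf();
```

The per-table storage parameters take precedence over the postgresql.conf settings, so the global defaults can stay conservative while only the biggest tables get the more aggressive schedule.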
