Re: autovacuum big table taking hours and sometimes seconds - Mailing list pgsql-performance

From: Laurenz Albe
Subject: Re: autovacuum big table taking hours and sometimes seconds
Date:
Msg-id: 53d097b669514cd2628678149a87f85b7ef3db5f.camel@cybertec.at
In response to: Re: autovacuum big table taking hours and sometimes seconds (Mariel Cherkassky <mariel.cherkassky@gmail.com>)
List: pgsql-performance
Mariel Cherkassky wrote:
> Let's focus, for example, on one of the outputs:
> postgresql-Fri.log:2019-02-08 05:05:53 EST  24776  LOG:  automatic vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
> postgresql-Fri.log-    pages: 2253 removed, 13737828 remain
> postgresql-Fri.log-    tuples: 21759258 removed, 27324090 remain
> postgresql-Fri.log-    buffer usage: 15031267 hits, 21081633 misses, 19274530 dirtied
> postgresql-Fri.log-    avg read rate: 2.700 MiB/s, avg write rate: 2.469 MiB/s
> 
> The cost_limit is set to 200 (default) and the cost_delay is set to 20ms. 
> The calculation I did : (1*15031267+10*21081633+20*19274530)/200*20/1000 = 61133.8197 seconds ~ 17H
> So autovacuum was sleeping for 17h? I think that I should increase the cost_limit to max specifically on the toasted table. What do you think? Am I wrong here?
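For anyone checking the arithmetic, the estimate can be reproduced in psql. This is a minimal sketch: the factors 1, 10 and 20 are the default vacuum_cost_page_hit, vacuum_cost_page_miss and vacuum_cost_page_dirty, and the buffer counts are taken from the log excerpt above.

    -- Estimated total sleep time imposed by cost-based vacuum delay
    SELECT (1  * 15031267            -- buffer hits   * vacuum_cost_page_hit
          + 10 * 21081633            -- buffer misses * vacuum_cost_page_miss
          + 20 * 19274530)::numeric  -- pages dirtied * vacuum_cost_page_dirty
           / 200     -- autovacuum_vacuum_cost_limit (default)
           * 20      -- autovacuum_vacuum_cost_delay in ms
           / 1000    -- ms -> seconds
           / 3600    -- seconds -> hours
           AS hours_sleeping;        -- ~ 16.98 hours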
 

Yes, increasing cost_limit or reducing cost_delay will both improve the situation.

Setting cost_delay = 0 makes autovacuum run as fast as possible.
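If you want to speed up autovacuum only for that TOAST table, the cost settings can be overridden per table through the toast.* storage parameters on the parent table. A sketch, where "mytable" is a placeholder for the (unknown) parent table of pg_toast_1958391:

    -- Remove the sleep entirely for this table's TOAST relation
    ALTER TABLE mytable SET (toast.autovacuum_vacuum_cost_delay = 0);

    -- Or, less aggressively, raise the cost limit instead
    ALTER TABLE mytable SET (toast.autovacuum_vacuum_cost_limit = 2000);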

Yours,
Laurenz Albe


