RE: Is the work done by vacuum in the current pass lost when interrupted? - Mailing list pgsql-general

From Jean-Marc Lessard
Subject RE: Is the work done by vacuum in the current pass lost when interrupted?
Date
Msg-id QB1PR01MB36334C6FE1F518EDE242867E94030@QB1PR01MB3633.CANPRD01.PROD.OUTLOOK.COM
In response to Re: Is the work done by vacuum in the current pass lost when interrupted?  (Michael Lewis <mlewis@entrata.com>)
List pgsql-general

Thank you, Michael, for your suggestion.

I will reduce vacuum_cost_delay and increase vacuum_cost_limit to speed up the vacuum.

However, the I/O system is heavily loaded on the PROD system, and I do not want to impact end-user performance.

If aggressive vacuum_cost_delay and vacuum_cost_limit settings significantly slow down end-user performance, I may have to kill the vacuum, but I would like to keep as much of the vacuum's work as possible.
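For reference, a minimal sketch of the session-level settings I have in mind for the manual VACUUM (the GUC names are standard PostgreSQL parameters; the values and table name are only illustrative):

```sql
-- Loosen cost-based throttling for this session only
SET vacuum_cost_delay = 1;      -- ms to sleep when the cost limit is reached
SET vacuum_cost_limit = 2000;   -- accumulated cost allowed before sleeping
VACUUM (VERBOSE) mytable;       -- mytable is a placeholder name
```

Session-level SET keeps the aggressive values from affecting autovacuum workers, which read the autovacuum_* / global settings.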

 

My concern is the size of the max_dead_tuples buffer, which is determined by autovacuum_work_mem:

  • autovacuum_work_mem is currently 1GB (it defaults to maintenance_work_mem)
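As a rough sketch of what that buffer holds, assuming the classic representation where each dead tuple costs one 6-byte ItemPointer (the pre-PostgreSQL-17 layout; the helper name is mine):

```python
# Approximate dead-tuple capacity of the vacuum buffer, assuming
# 6 bytes per dead-tuple TID (pre-PostgreSQL-17 representation).
BYTES_PER_DEAD_TUPLE = 6

def max_dead_tuples(work_mem_bytes: int) -> int:
    """Dead tuples that fit in one index-vacuum pass."""
    return work_mem_bytes // BYTES_PER_DEAD_TUPLE

one_gb = 1024 ** 3
print(max_dead_tuples(one_gb))  # → 178956970, i.e. ~179 million dead tuples
```

So at 1GB the buffer rarely fills, which is why the vacuum ran as a single scan/index/heap cycle.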

 

When I ran a vacuum on a DEV system, it made a single "scanning heap / vacuuming indexes / vacuuming heap" cycle of about 12 hours.

The vacuuming indexes phase took half the total time (6 hrs).

 

Q1: I guess that if I have to kill the vacuum midway, I will lose all the work it did during the scanning heap step. Right?

Q2: If I reduce autovacuum_work_mem to 32MB, are there any performance side effects on the vacuum?
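The trade-off I understand to be behind Q2, sketched under the same 6-bytes-per-dead-tuple assumption (pre-PostgreSQL-17; the function is a hypothetical helper, not a PostgreSQL API): a smaller buffer means the "vacuuming indexes" phase must run once per buffer-full of dead tuples.

```python
import math

# Assumption: 6 bytes per dead-tuple TID (pre-PostgreSQL-17 layout).
BYTES_PER_DEAD_TUPLE = 6

def index_vacuum_passes(dead_tuples: int, work_mem_bytes: int) -> int:
    """How many 'vacuuming indexes' passes a vacuum would need."""
    capacity = work_mem_bytes // BYTES_PER_DEAD_TUPLE
    return max(1, math.ceil(dead_tuples / capacity))

mb = 1024 ** 2
# Illustrative table with 100 million dead tuples:
print(index_vacuum_passes(100_000_000, 32 * mb))    # → 18 passes at 32MB
print(index_vacuum_passes(100_000_000, 1024 * mb))  # → 1 pass at 1GB
```

With the 6-hour index phase described above, extra passes would dominate the total runtime, so the smaller buffer mainly trades memory for repeated index scans.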

 

Thanks
