Re: error_severity of brin work item - Mailing list pgsql-hackers

From: Justin Pryzby
Subject: Re: error_severity of brin work item
Date: 2020-11-25
Msg-id: 20201125172356.GF24052@telsasoft.com
In response to: Re: error_severity of brin work item (Alvaro Herrera <alvherre@alvh.no-ip.org>)
Responses: Re: error_severity of brin work item (Alvaro Herrera <alvherre@alvh.no-ip.org>)
List: pgsql-hackers
On Mon, Nov 23, 2020 at 04:39:57PM -0300, Alvaro Herrera wrote:
> I think this formulation (attached v3) has fewer moving parts.
> 
> However, now that I did that, I wonder if this is really the best
> approach to solve this problem.  Maybe instead of doing this at the BRIN
> level, it should be handled at the autovac level, by having the worker
> copy the work-item to local memory and remove it from the shared list as
> soon as it is in progress.  That way, if *any* error occurs while trying
> to execute it, it will go away instead of being retried for all
> eternity.
> 
> Preliminary patch for that attached as autovacuum-workitem.patch.
> 
> I would propose to clean that up to apply instead of your proposed fix.
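
If I've understood the autovacuum-level idea correctly, it amounts to roughly
this in do_autovacuum()'s work-item loop (a sketch of the shape only, not your
actual autovacuum-workitem.patch; names are as in autovacuum.c):

    AutoVacuumWorkItem localitem;

    /* Copy the item into backend-local memory and free the shared slot. */
    LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
    localitem = *workitem;          /* struct copy */
    workitem->avw_used = false;     /* already gone from the shared list */
    LWLockRelease(AutovacuumLock);

    /*
     * If anything below raises an ERROR, the item has already been removed
     * from shared memory, so it won't be picked up again.
     */
    perform_work_item(&localitem);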

I don't know why you said it would be retried for all eternity?
I think the TRY/CATCH in perform_work_item() already avoids that.
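
For reference, the guts of perform_work_item() look roughly like this, from
memory (simplified; the errcontext and memory-context bookkeeping are left out):

    PG_TRY();
    {
        switch (workitem->avw_type)
        {
            case AVW_BRINSummarizeRange:
                DirectFunctionCall2(brin_summarize_range,
                                    ObjectIdGetDatum(workitem->avw_relation),
                                    Int64GetDatum((int64) workitem->avw_blockNumber));
                break;
            default:
                elog(WARNING, "unrecognized work item found: type %d",
                     workitem->avw_type);
                break;
        }
    }
    PG_CATCH();
    {
        /* Report the error but don't re-throw, so the worker stays alive. */
        HOLD_INTERRUPTS();
        EmitErrorReport();
        AbortOutOfAnyTransaction();
        FlushErrorState();
        StartTransactionCommand();
        RESUME_INTERRUPTS();
    }
    PG_END_TRY();

So, as far as I can tell, a failing item is reported once and then marked done
by the caller rather than retried forever.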

Also, I think your idea doesn't solve my issue, which is that REINDEX
CONCURRENTLY causes vacuum to leave errors in my logs.

I checked that the first patch avoids the issue and the second one does not.
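
For the archives, the shape of the BRIN-level fix, as I understand it, is
roughly the following (a simplified sketch of the idea, not the actual v3 diff;
idxoid here stands for the work item's target index OID):

    Relation idxRel;

    /*
     * Open the index only if it still exists.  A queued work item can name an
     * index OID that has since gone away, e.g. because REINDEX CONCURRENTLY
     * replaced the index, and that shouldn't be an ERROR here.
     */
    idxRel = try_relation_open(idxoid, ShareUpdateExclusiveLock);
    if (idxRel == NULL)
    {
        ereport(DEBUG1,
                (errmsg("skipping summarization of index %u: it no longer exists",
                        idxoid)));
        return;
    }

The severity used when skipping is of course the point of this thread; DEBUG1
above is just a placeholder.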

-- 
Justin


