Re: Proposal: Log inability to lock pages during vacuum - Mailing list pgsql-hackers

From: Jim Nasby
Subject: Re: Proposal: Log inability to lock pages during vacuum
Msg-id: 5446E577.3060801@BlueTreble.com
In response to: Re: Proposal: Log inability to lock pages during vacuum (Alvaro Herrera <alvherre@2ndquadrant.com>)
List: pgsql-hackers
On 10/21/14, 5:39 PM, Alvaro Herrera wrote:
> Jim Nasby wrote:
>
>> Currently, a non-freeze vacuum will punt on any page it can't get a
>> cleanup lock on, with no retry. Presumably this should be a rare
>> occurrence, but I think it's bad that we just assume that and won't
>> warn the user if something bad is going on.
>
> I think if you really want to attack this problem, rather than just
> being noisy about it, what you could do is to keep a record of which
> page numbers you had to skip, and then once you're done with your first
> scan you go back and retry the lock on the pages you skipped.

I'm OK with that if the community is; I was just trying for minimum invasiveness.
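
Roughly what I have in mind, as an untested sketch (skipped_blocks, nskipped and max_skipped are placeholder names; the buffer calls are the existing bufmgr ones, and the surrounding locals are the usual ones in lazy_scan_heap()):

/* somewhere in lazy_scan_heap(), alongside the existing locals: */
BlockNumber *skipped_blocks;	/* palloc'd, max_skipped entries */
int			nskipped = 0;
int			i;

/* First pass: where we currently punt on the cleanup lock, remember the block. */
if (!ConditionalLockBufferForCleanup(buf))
{
	if (nskipped < max_skipped)
		skipped_blocks[nskipped++] = blkno;
	ReleaseBuffer(buf);
	continue;			/* same skip behavior as today */
}

/* After the first pass completes: retry each remembered block once. */
for (i = 0; i < nskipped; i++)
{
	buf = ReadBufferExtended(onerel, MAIN_FORKNUM, skipped_blocks[i],
							 RBM_NORMAL, vac_strategy);
	if (ConditionalLockBufferForCleanup(buf))
	{
		/* ... process the page exactly as in the main loop ... */
		UnlockReleaseBuffer(buf);
	}
	else
		ReleaseBuffer(buf);	/* still pinned elsewhere; give up for real */
}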

If I go this route, I'd like some input though...

- How to handle storing the block IDs. Fixed-size array or something fancier? What should we limit it to, especially since we're already allocating maintenance_work_mem for the tid array? (One possible sizing scheme is sketched after this list.)

- What happens if we run out of space to remember skipped blocks? I could do something like what we do for running out of space in the dead_tuples array, but I'm worried that will add a serious amount of complexity, especially since re-processing these blocks could be what actually pushes us over the limit.
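
One simple answer to both questions would be to cap the array at a small, arbitrary fraction of maintenance_work_mem and just stop recording once it fills, so any overflow keeps today's skip-with-no-retry behavior. A sketch (SKIPPED_BLOCKS_PCT and max_skipped_blocks are made-up names, and 1% is a number pulled out of the air):

/*
 * maintenance_work_mem is in KB; carve out at most SKIPPED_BLOCKS_PCT
 * of it for remembering skipped blocks.  On overflow we simply stop
 * recording, i.e. no dead_tuples-style spill logic.
 */
#define SKIPPED_BLOCKS_PCT	1

static int
max_skipped_blocks(void)
{
	long		bytes = (long) maintenance_work_mem * 1024L * SKIPPED_BLOCKS_PCT / 100;

	return Max(64, (int) (bytes / sizeof(BlockNumber)));
}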
-- 
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


