Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock
Date:
Msg-id: CA+TgmoYNPLEyZNrkS+wbLC+xHFiCHaQ87A5cVuEYXubs_nS92g@mail.gmail.com
In response to: Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock (Andres Freund <andres@anarazel.de>)
Responses: Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock
List: pgsql-hackers
On Wed, Aug 10, 2022 at 1:28 AM Andres Freund <andres@anarazel.de> wrote:
> I assume this is trying to defend against some sort of deadlock by not
> actually getting a cleanup lock (by passing get_cleanup_lock = true to
> XLogReadBufferForRedoExtended()).

I had that thought too, but I don't *think* it's the case. This
function acquires a lock on the old bucket page, then on the new
bucket page. We could deadlock if someone who holds a pin on the new
bucket page tries to take a content lock on the old bucket page. But
who would do that? The new bucket page isn't yet linked from the
metapage at this point, so no scan should do that. There can be no
concurrent writers during replay. I think that if someone else has the
new page pinned, they probably should not be taking content locks on
other buffers at the same time.
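
For reference, the acquisition order I'm describing looks roughly like
this (a condensed, from-memory sketch of the replay path, with the
metapage handling and dirty/LSN bookkeeping elided; names and details
are approximate):

    #include "postgres.h"

    #include "access/hash.h"
    #include "access/hash_xlog.h"
    #include "access/xlogutils.h"
    #include "storage/bufmgr.h"

    /* Condensed sketch of hash_xlog_split_allocate_page's locking order. */
    static void
    split_allocate_page_redo_sketch(XLogReaderState *record)
    {
        xl_hash_split_allocate_page *xlrec =
            (xl_hash_split_allocate_page *) XLogRecGetData(record);
        Buffer      oldbuf;
        Buffer      newbuf;

        /* Old bucket page first: cleanup lock, waiting if necessary. */
        (void) XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL,
                                             true /* get_cleanup_lock */,
                                             &oldbuf);

        /*
         * New bucket page second: zeroed and initialized here, and not yet
         * linked from the metapage.  The conditional check below is what
         * raises the "failed to acquire cleanup lock" PANIC.
         */
        newbuf = XLogInitBufferForRedo(record, 1);
        _hash_initbuf(newbuf, xlrec->new_bucket, xlrec->new_bucket,
                      xlrec->new_bucket_flag, true);
        if (!IsBufferCleanupOK(newbuf))
            elog(PANIC, "hash_xlog_split_allocate_page: failed to acquire cleanup lock");

        /* ... metapage update, MarkBufferDirty/PageSetLSN, unlock/release ... */
    }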

So maybe we can just apply something like the attached.
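
To make that concrete (an illustrative sketch only, not necessarily
what the attached patch does): if the deadlock can't happen, the new
page's cleanup lock can just be taken while the buffer is zeroed,
replacing the new-bucket portion of the sketch above with something
like:

    /*
     * Illustration only: request the cleanup lock up front while zeroing
     * the new bucket page, so the IsBufferCleanupOK()/PANIC check is no
     * longer needed.
     */
    (void) XLogReadBufferForRedoExtended(record, 1, RBM_ZERO_AND_CLEANUP_LOCK,
                                         true /* get_cleanup_lock */,
                                         &newbuf);
    _hash_initbuf(newbuf, xlrec->new_bucket, xlrec->new_bucket,
                  xlrec->new_bucket_flag, true);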

-- 
Robert Haas
EDB: http://www.enterprisedb.com
