Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock - Mailing list pgsql-hackers

From Robert Haas
Subject Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock
Date
Msg-id CA+Tgmob9=XBiNtg6AQv7W_LLFZGo-GgW1zDpVXV8gRDtDVWWGA@mail.gmail.com
In response to Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock  (Andres Freund <andres@anarazel.de>)
Responses Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock  (Amit Kapila <amit.kapila16@gmail.com>)
List pgsql-hackers
On Mon, Oct 17, 2022 at 4:30 PM Andres Freund <andres@anarazel.de> wrote:
> On 2022-10-17 13:34:02 -0400, Robert Haas wrote:
> > I don't feel quite as confident that not attempting a cleanup lock on
> > the new bucket's primary page is OK. I think it should be fine. The
> > existing comment even says it should be fine. But, that comment could
> > be wrong, and I'm not sure that I have my head around what all of the
> > possible interactions around that cleanup lock are. So changing it
> > makes me a little nervous.
>
> If it's not OK, then the acquire-cleanuplock-after-reinit would be an
> active bug though, right?

Yes, probably so.

Another approach here would be to have a variant of _hash_getnewbuf
that neither uses RBM_ZERO_AND_LOCK nor calls _hash_pageinit, and then
call _hash_pageinit here, perhaps just before nopaque =
HashPageGetOpaque(npage), so that the initialization happens within
the critical section. But that doesn't feel very consistent with the
rest of the code.
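
For concreteness, here is a rough and untested sketch of that second
idea. The _hash_getnewbuf_noinit() name is made up for the
hypothetical variant that neither uses RBM_ZERO_AND_LOCK nor calls
_hash_pageinit; the rest is meant to mirror the existing
_hash_expandtable code (from memory), with the unrelated parts elided:

    /* hypothetical variant: allocate and lock the page, but don't init it */
    buf_nblkno = _hash_getnewbuf_noinit(rel, start_nblkno, MAIN_FORKNUM);

    ...

    START_CRIT_SECTION();

    ...

    npage = BufferGetPage(buf_nblkno);

    /*
     * Moved here from _hash_getnewbuf(): initialize the new bucket's
     * primary page inside the critical section, just before filling in
     * its special space.
     */
    _hash_pageinit(npage, BufferGetPageSize(buf_nblkno));

    nopaque = HashPageGetOpaque(npage);
    /* ... set up the special space as the existing code already does ... */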

Maybe just nuking the IsBufferCleanupOK call is best, I don't know. I
honestly doubt that it matters very much what we pick here.
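
To spell out that simpler option: it would just mean deleting the
IsBufferCleanupOK(buf_nblkno) check (and the goto-fail path it guards)
after the _hash_getnewbuf call in _hash_expandtable, leaving roughly
this (again untested, and quoting from memory):

    /* Physically allocate the new bucket's primary page. */
    buf_nblkno = _hash_getnewbuf(rel, start_nblkno, MAIN_FORKNUM);

That amounts to not insisting on a cleanup lock on the new bucket's
primary page at all, on the theory that no scan can find that bucket
until the metapage has been updated.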

-- 
Robert Haas
EDB: http://www.enterprisedb.com
