Re: Changing the state of data checksums in a running cluster - Mailing list pgsql-hackers

From Daniel Gustafsson
Subject Re: Changing the state of data checksums in a running cluster
Date
Msg-id 1EF07DE7-ED45-4FF1-9571-424BAD15151F@yesql.se
In response to Re: Changing the state of data checksums in a running cluster  (Ayush Tiwari <ayushtiwari.slg01@gmail.com>)
Responses Re: Changing the state of data checksums in a running cluster
Re: Changing the state of data checksums in a running cluster
List pgsql-hackers
> On 5 May 2026, at 17:21, Ayush Tiwari <ayushtiwari.slg01@gmail.com> wrote:

> I have a small concern with 0001.  The new guard uses only
> RelationNeedsWAL(reln), but ProcessSingleRelationByOid() iterates over all
> forks.  For unlogged relations the init fork is special: several existing
> call sites preserve WAL logging for INIT_FORKNUM, for example using
>
>   RelationNeedsWAL(rel) || forknum == INIT_FORKNUM
>
> and catalog/storage.c notes that unlogged init forks need WAL and sync.
>
> So I think the condition in ProcessSingleRelationFork() should preserve the
> init-fork case, e.g.
>
>   if (RelationNeedsWAL(reln) || forkNum == INIT_FORKNUM)
>       log_newpage_buffer(buf, false);

Which failure scenario are you thinking about here?  When dealing with the
catalog relation I can see the need, but here we are reading, and writing,
data pages.  In which case would we need to issue an FPI for the init fork
of an unlogged relation?  I might be missing something obvious here.
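
For concreteness, a minimal sketch of the loop shape in question with the
suggested guard applied.  The buffer handling below is illustrative only,
not the actual patch code; only ProcessSingleRelationFork(),
RelationNeedsWAL() and log_newpage_buffer(buf, false) are taken from the
discussion above:

    /*
     * Simplified sketch of the per-fork loop with the proposed guard.
     * Buffer handling is abbreviated and error paths are omitted.
     */
    static void
    ProcessSingleRelationFork(Relation reln, ForkNumber forkNum,
                              BufferAccessStrategy strategy)
    {
        BlockNumber nblocks = RelationGetNumberOfBlocksInFork(reln, forkNum);

        for (BlockNumber blkno = 0; blkno < nblocks; blkno++)
        {
            Buffer      buf = ReadBufferExtended(reln, forkNum, blkno,
                                                 RBM_NORMAL, strategy);

            LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
            START_CRIT_SECTION();
            MarkBufferDirty(buf);

            /* proposed: also WAL-log init-fork pages of unlogged relations */
            if (RelationNeedsWAL(reln) || forkNum == INIT_FORKNUM)
                log_newpage_buffer(buf, false);

            END_CRIT_SECTION();
            UnlockReleaseBuffer(buf);
        }
    }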

> 0002 and 0003 look good to me.

Thanks for looking!

--
Daniel Gustafsson
