Re: remove more archiving overhead - Mailing list pgsql-hackers

From: Fujii Masao
Subject: Re: remove more archiving overhead
Msg-id: 768e6cb3-256d-9c0b-1797-62420ffca7ae@oss.nttdata.com
In response to: Re: remove more archiving overhead (Nathan Bossart <nathandbossart@gmail.com>)
Responses: Re: remove more archiving overhead
List: pgsql-hackers

On 2022/04/08 7:23, Nathan Bossart wrote:
> On Thu, Feb 24, 2022 at 09:55:53AM -0800, Nathan Bossart wrote:
>> Yes.  I found that a crash at an unfortunate moment can produce multiple
>> links to the same file in pg_wal, which seemed bad independent of archival.
>> By fixing that (i.e., switching from durable_rename_excl() to
>> durable_rename()), we not only avoid this problem, but we also avoid trying
>> to archive a file the server is concurrently writing.  Then, after a crash,
>> the WAL file to archive should either not exist (which is handled by the
>> archiver) or contain the same contents as any preexisting archives.
> 
> I moved the fix for this to a new thread [0] since I think it should be
> back-patched.  I've attached a new patch that only contains the part
> related to reducing archiving overhead.
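
(For readers following the thread: the hazard described in the quoted text comes from durable_rename_excl() creating the new name with link() and then removing the old one with unlink(), so a crash between the two steps leaves the WAL segment reachable under both names. The sketch below is only illustrative and is an assumption about the mechanism, not the actual fd.c code, which also fsyncs the files and their parent directory.)

    /*
     * Minimal sketch of the two rename strategies discussed above.
     * Illustrative only; the real durable_rename()/durable_rename_excl()
     * routines add fsync of the files and the containing directory.
     */
    #include <stdio.h>      /* rename() */
    #include <unistd.h>     /* link(), unlink() */

    /* durable_rename_excl() style: refuses to overwrite, but needs two steps. */
    static int
    rename_excl_style(const char *oldpath, const char *newpath)
    {
        if (link(oldpath, newpath) < 0)     /* create a second hard link */
            return -1;
        /*
         * A crash here leaves the same file reachable as both oldpath and
         * newpath -- the "multiple links" state described above.
         */
        return unlink(oldpath);             /* drop the old name */
    }

    /* durable_rename() style: one atomic step; silently replaces newpath. */
    static int
    rename_style(const char *oldpath, const char *newpath)
    {
        return rename(oldpath, newpath);    /* old and new names never coexist */
    }

With rename(), a crash leaves either the old name or the new one, never both, which is why recovery sees at most one copy of the segment.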

Thanks for updating the patch. It looks good to me.
Barring any objections, I'm planning to commit it.

Regards,

-- 
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION


