Re: remove more archiving overhead - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: remove more archiving overhead
Date:
Msg-id: CA+TgmoYEkyU_D_ZihWRNWD9qC0sJ5y8gQtPc2YzVNvEzBAtRSA@mail.gmail.com
In response to: Re: remove more archiving overhead (Nathan Bossart <nathandbossart@gmail.com>)
Responses: Re: remove more archiving overhead (Nathan Bossart <nathandbossart@gmail.com>)
List: pgsql-hackers
On Thu, Apr 7, 2022 at 6:23 PM Nathan Bossart <nathandbossart@gmail.com> wrote:
> On Thu, Feb 24, 2022 at 09:55:53AM -0800, Nathan Bossart wrote:
> > Yes.  I found that a crash at an unfortunate moment can produce multiple
> > links to the same file in pg_wal, which seemed bad independent of archival.
> > By fixing that (i.e., switching from durable_rename_excl() to
> > durable_rename()), we not only avoid this problem, but we also avoid trying
> > to archive a file the server is concurrently writing.  Then, after a crash,
> > the WAL file to archive should either not exist (which is handled by the
> > archiver) or contain the same contents as any preexisting archives.
>
> I moved the fix for this to a new thread [0] since I think it should be
> back-patched.  I've attached a new patch that only contains the part
> related to reducing archiving overhead.

Since we're now past feature freeze, this will need to wait for v16, but it
looks like a reasonable change to me.
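For anyone following along, here is a minimal sketch (not the actual fd.c
code, and with hypothetical paths and helper names) of the difference Nathan
describes above, assuming POSIX semantics: durable_rename_excl() worked in a
link-then-unlink style, so a crash between the two calls could leave two
names for the same WAL segment, whereas durable_rename() relies on an atomic
rename() (the real function also fsyncs the file and its directory):

/*
 * Minimal sketch (not the actual fd.c implementation) contrasting the two
 * rename strategies, assuming POSIX semantics.  Paths and function names
 * here are hypothetical illustrations.
 */
#include <stdio.h>
#include <unistd.h>

static int
rename_excl_style(const char *oldpath, const char *newpath)
{
	/*
	 * Link-then-unlink, as durable_rename_excl() effectively did: a crash
	 * between the two calls leaves both names pointing at the same inode,
	 * i.e. the "multiple links to the same file in pg_wal" hazard.
	 */
	if (link(oldpath, newpath) < 0)
		return -1;
	/* a crash here leaves oldpath and newpath as links to one file */
	return unlink(oldpath);
}

static int
rename_style(const char *oldpath, const char *newpath)
{
	/*
	 * rename() is atomic, so after a crash the segment exists under
	 * exactly one of the two names, never both; the real durable_rename()
	 * additionally fsyncs the file and the containing directory.
	 */
	return rename(oldpath, newpath);
}

int
main(void)
{
	if (rename_style("pg_wal/oldseg", "pg_wal/newseg") < 0)
		perror("rename");
	return 0;
}

That atomicity is what lets the archiver assume, after a crash, that a
segment to archive either does not exist or matches any copy already
archived, as described in the quoted text.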

-- 
Robert Haas
EDB: http://www.enterprisedb.com
