Re: .ready and .done files considered harmful - Mailing list pgsql-hackers

From Bossart, Nathan
Subject Re: .ready and .done files considered harmful
Date
Msg-id 08C406B6-1672-475B-AE03-1ABD6C9F7984@amazon.com
In response to Re: .ready and .done files considered harmful  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers

On 8/23/21, 6:42 AM, "Robert Haas" <robertmhaas@gmail.com> wrote:
> On Sun, Aug 22, 2021 at 10:31 PM Bossart, Nathan <bossartn@amazon.com> wrote:
>> I ran this again on a bigger machine with 200K WAL files pending
>> archive.  The v9 patch took ~5.5 minutes, the patch I sent took ~8
>> minutes, and the existing logic took just under 3 hours.
>
> Hmm. On the one hand, 8 minutes > 5.5 minutes, and presumably the gap
> would only get wider if the number of files were larger or if reading
> the directory were slower. I am pretty sure that reading the directory
> must be much slower in some real deployments where this problem has
> come up. On the other hand, 8.8 minutes << 3 hours, and your patch
> would win if somehow we had a ton of gaps in the sequence of files.
> I'm not sure how likely that is to be the cause - probably not very
> likely at all if you aren't using an archive command that cheats, but
> maybe really common if you are. Hmm, but I think if the
> archive_command cheats by marking a bunch of files done when it is
> tasked with archiving just one, your patch will break, because, unless
> I'm missing something, it doesn't re-evaluate whether things have
> changed on every pass through the loop as Dipesh's patch does. So I
> guess I'm not quite sure I understand why you think this might be the
> way to go?

To handle a "cheating" archive command, I'd probably need to add a
stat() each time pgarch_readyXLog() returns something from
arch_files.  I suspect something similar might be needed in Dipesh's
patch to handle backup history files and partial WAL files.
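To make the idea concrete, here's a minimal sketch (Python stand-in for the C
logic; the function and argument names are illustrative, not PostgreSQL's):
before handing out a cached candidate, re-check that its .ready file still
exists, and skip entries an archive_command has already marked done behind
our back.

```python
import os

def next_ready_file(arch_files, archive_status_dir):
    """Pop cached candidates, re-checking that each .ready file still exists.

    A 'cheating' archive_command may have already renamed foo.ready to
    foo.done out-of-band; in that case the cached entry is stale and must
    be skipped.  (Names here are illustrative, not PostgreSQL's.)
    """
    while arch_files:
        xlog = arch_files.pop()
        ready_path = os.path.join(archive_status_dir, xlog + ".ready")
        if os.path.exists(ready_path):   # the stat() check discussed above
            return xlog
        # stale entry: already archived out-of-band; fall through to the next
    return None
```

The cost is one extra stat() per returned file, which is still far cheaper
than a full directory scan per file.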

In any case, I think Dipesh's patch is the way to go.  It obviously
will perform better in the extreme cases discussed in this thread.  I
think it's important to make sure the patch doesn't potentially leave
files behind to be picked up by a directory scan that might not
happen, but there are likely ways to handle that.  In the worst case,
perhaps we need to force a directory scan every N files to make sure
nothing gets left behind.  But maybe we can do better.

> Maintaining the binary heap in lowest-priority-first order is very
> clever, and the patch does look quite elegant. I'm just not sure I
> understand the point.

This was mostly an exploratory exercise to get some numbers for the
different approaches discussed in this thread.
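For anyone following along, the lowest-priority-first heap idea can be
sketched as follows (a Python stand-in for the C patch; names are
illustrative): keep a bounded heap whose root is the *least* urgent file
retained so far, so one directory scan yields the oldest `limit` candidates
with only an O(log limit) comparison per entry.

```python
import heapq

class ReverseOrder:
    """Wrapper inverting comparison, so heapq's min-heap acts as a max-heap."""
    __slots__ = ("name",)
    def __init__(self, name):
        self.name = name
    def __lt__(self, other):
        return self.name > other.name

def oldest_pending(ready_names, limit):
    """One directory scan -> the `limit` oldest WAL file names.

    The heap is kept in lowest-priority-first order: its root is the
    least urgent file retained so far, so each new name only has to be
    compared against the root before being pushed or discarded.
    """
    heap = []
    for name in ready_names:
        if len(heap) < limit:
            heapq.heappush(heap, ReverseOrder(name))
        elif name < heap[0].name:            # older than the least-urgent kept
            heapq.heapreplace(heap, ReverseOrder(name))
    return sorted(w.name for w in heap)      # oldest first
```

This is why the approach amortizes well: the directory is read once, and the
archiver then works through the retained batch oldest-first.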

Nathan

