On 5/4/21, 7:07 AM, "Robert Haas" <robertmhaas@gmail.com> wrote:
> On Tue, May 4, 2021 at 12:27 AM Andres Freund <andres@anarazel.de> wrote:
>> On 2021-05-03 16:49:16 -0400, Robert Haas wrote:
>> > I have two possible ideas for addressing this; perhaps other people
>> > will have further suggestions. A relatively non-invasive fix would be
>> > to teach pgarch.c how to increment a WAL file name. After archiving
>> > segment N, check using stat() whether there's an .ready file for
>> > segment N+1. If so, do that one next. If not, then fall back to
>> > performing a full directory scan.
>>
>> Hm. I wonder if it'd not be better to determine multiple files to be
>> archived in one readdir() pass?
>
> I think both methods have some merit. If we had a way to pass a range
> of files to archive_command instead of just one, then your way is
> distinctly better, and perhaps we should just go ahead and invent such
> a thing. If not, your way doesn't entirely solve the O(n^2) problem,
> since you have to choose some upper bound on the number of file names
> you're willing to buffer in memory, but it may lower it enough that it
> makes no practical difference. I am somewhat inclined to think that it
> would be good to start with the method I'm proposing, since it is a
> clear-cut improvement over what we have today and can be done with a
> relatively limited amount of code change and no redesign, and then
> perhaps do something more ambitious afterward.
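For reference, my reading of the stat()-based approach is roughly the
following (a standalone sketch only, not code from any patch in this
thread; the helper name and the hard-coded 16MB segment size are my
assumptions):

/*
 * After segment "last_segno" has been archived, guess the next segment's
 * .ready file name and stat() it before falling back to a full directory
 * scan of archive_status.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/stat.h>

#define SEGS_PER_XLOGID 0x100       /* assumes 16MB wal_segment_size */

static bool
next_segment_is_ready(const char *archive_status_dir,
                      uint32_t tli, uint64_t last_segno,
                      char *next_fname, size_t len)
{
    uint64_t    next_segno = last_segno + 1;
    char        path[1024];
    struct stat st;

    /* Build the WAL file name the way XLogFileName() would. */
    snprintf(next_fname, len, "%08X%08X%08X", tli,
             (uint32_t) (next_segno / SEGS_PER_XLOGID),
             (uint32_t) (next_segno % SEGS_PER_XLOGID));
    snprintf(path, sizeof(path), "%s/%s.ready",
             archive_status_dir, next_fname);

    /* If the .ready file exists, archive that segment next ... */
    if (stat(path, &st) == 0)
        return true;

    /* ... otherwise the caller falls back to a full directory scan. */
    return false;
}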
I was curious about this, so I wrote a patch (attached) that stores
multiple files per directory scan and tested it against the latest
patch in this thread (v9) [0]. Specifically, I set archive_command to
'false', created ~20K WAL segments, and then restarted the server with
archive_command set to 'true'. Both the v9 patch and the attached
patch finished archiving all segments in just under a minute. (I
tested the attached patch with NUM_FILES_PER_DIRECTORY_SCAN set to 64,
128, and 256 and didn't observe any significant difference between
those settings.) The existing logic took over 4 minutes to archive
the same set of segments.
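To give a clearer picture of what the attached patch does, here is a
simplified standalone sketch of the idea (NUM_FILES_PER_DIRECTORY_SCAN
is the setting I varied above, but MAX_READY_NAME and the function
names are just illustrative, and the real patch of course works inside
pgarch.c rather than as standalone code):

/*
 * Collect up to NUM_FILES_PER_DIRECTORY_SCAN ".ready" files in a single
 * readdir() pass, keeping the oldest (lowest-named) ones, so the archiver
 * can work through a batch without rescanning the directory per file.
 */
#include <dirent.h>
#include <stdlib.h>
#include <string.h>

#define NUM_FILES_PER_DIRECTORY_SCAN 64
#define MAX_READY_NAME 64

static int
ready_name_cmp(const void *a, const void *b)
{
    return strcmp((const char *) a, (const char *) b);
}

/* Returns the number of names collected into files[], oldest first. */
static int
collect_ready_files(const char *archive_status_dir,
                    char files[][MAX_READY_NAME])
{
    DIR        *dir = opendir(archive_status_dir);
    struct dirent *de;
    int         nfiles = 0;

    if (dir == NULL)
        return 0;

    while ((de = readdir(dir)) != NULL)
    {
        size_t      len = strlen(de->d_name);
        char        fname[MAX_READY_NAME];

        /* Only consider "<segment>.ready" entries of sane length. */
        if (len <= 6 || len - 6 >= MAX_READY_NAME ||
            strcmp(de->d_name + len - 6, ".ready") != 0)
            continue;

        /* Strip the ".ready" suffix. */
        memcpy(fname, de->d_name, len - 6);
        fname[len - 6] = '\0';

        if (nfiles < NUM_FILES_PER_DIRECTORY_SCAN)
            strcpy(files[nfiles++], fname);
        else
        {
            /*
             * Batch is full: replace the newest stored name if this one
             * is older, so we always keep the oldest N names.
             */
            int         newest = 0;

            for (int i = 1; i < nfiles; i++)
                if (strcmp(files[i], files[newest]) > 0)
                    newest = i;
            if (strcmp(fname, files[newest]) < 0)
                strcpy(files[newest], fname);
        }
    }
    closedir(dir);

    /* Hand the batch to the caller oldest-first. */
    qsort(files, nfiles, MAX_READY_NAME, ready_name_cmp);
    return nfiles;
}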
I'm hoping to rerun this test with many more (100K+) status files, as
I expect the v9 patch to be faster at that scale, though I'm not sure
by how much.
Nathan
[0] https://www.postgresql.org/message-id/attachment/125543/v9-0001-mitigate-directory-scan-for-WAL-archiver.patch