Re: Would it be possible to have parallel archiving? - Mailing list pgsql-hackers

From David Steele
Subject Re: Would it be possible to have parallel archiving?
Date
Msg-id dbd2184b-987b-7a49-a341-f8c44940bf3a@pgmasters.net
In response to Re: Would it be possible to have parallel archiving?  (Andrey Borodin <x4mmm@yandex-team.ru>)
Responses Re: Would it be possible to have parallel archiving?
List pgsql-hackers
On 8/28/18 4:34 PM, Andrey Borodin wrote:
>>
>> I still don't think it's a good idea and I specifically recommend
>> against making changes to the archive status files- those are clearly
>> owned and managed by PG and should not be whacked around by external
>> processes.
> If you do not write to archive_status, you basically have two options:
> 1. On every archive_command recheck that the archived file is identical
> to the file that is already archived. This hurts performance.
> 2. Hope that files match. This does not add any safety compared to
> whacking archive_status. This approach is prone to core changes, as
> writes are.

Another option is to maintain the state of what has been safely archived
(and what has errored) locally.  This allows pgBackRest to rapidly
return the status to Postgres without rechecking against the repository,
which, as you note, would be very slow.
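
A minimal sketch of that idea (this is not pgBackRest's actual
implementation; `push`, the state-file location, and the JSON format are
all hypothetical stand-ins): record each segment's checksum locally on
success, so a repeated call for the same segment can answer immediately
without touching the repository.

```python
"""Hypothetical local archive-state cache: remember which WAL segments
were safely archived so a repeated archive_command invocation can return
success without re-reading the repository."""
import hashlib
import json
import os


def wal_sha256(path):
    """Checksum the WAL segment so a re-archive of different content
    with the same name is never silently treated as already done."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def archive_segment(wal_path, push, state_file):
    """Return True once the segment is safely archived.

    `push` is a callable that transfers the file to the repository and
    raises on failure (a stand-in for the real transfer code).
    """
    name = os.path.basename(wal_path)
    digest = wal_sha256(wal_path)

    state = {}
    if os.path.exists(state_file):
        with open(state_file) as f:
            state = json.load(f)

    if state.get(name) == digest:
        return True  # already archived with identical content

    push(wal_path)  # may raise; Postgres will retry the segment later

    # Persist the new state atomically so a crash cannot leave a
    # half-written cache claiming success for an unarchived segment.
    state[name] = digest
    tmp = state_file + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, state_file)
    return True
```

The key property is that the cache is only advisory on the success path:
a cache miss degrades to re-pushing, never to falsely reporting success.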

This allows more than one archive command to be run safely, since all
archive commands must succeed before Postgres will mark the segment as done.
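
To illustrate the "all must succeed" rule (a hypothetical wrapper, not
anything shipped by Postgres or pgBackRest): an archive_command that
fans out to several archivers only exits zero when every one succeeded,
so Postgres keeps retrying the segment otherwise and never marks it done
prematurely.

```python
"""Hypothetical archive_command wrapper: fan a WAL segment out to
several archivers and report success to Postgres only if all succeed.
Postgres treats any nonzero exit status as failure and will invoke
archive_command again for the same segment later."""
import subprocess


def archive_all(wal_path, commands):
    """Run each archiver command with the segment path appended.

    Returns 0 (success, segment may be marked .done) only when every
    command exited zero; returns 1 on the first failure.
    """
    for cmd in commands:
        if subprocess.run(cmd + [wal_path]).returncode != 0:
            return 1  # at least one archiver failed: do not mark .done
    return 0  # all archivers succeeded
```

Because a partial success still exits nonzero, a retried segment must be
safe to re-archive to destinations that already have it, which is exactly
where the local state cache above earns its keep.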

It's true that reading archive_status is susceptible to core changes,
but the less interaction the better, I think.

Regards,
-- 
-David
david@pgmasters.net

