Greetings,
* Andrey Borodin (x4mmm@yandex-team.ru) wrote:
> > On 28 Aug 2018, at 17:07, Stephen Frost <sfrost@snowman.net> wrote:
> > I still don't think it's a good idea and I specifically recommend
> > against making changes to the archive status files- those are clearly
> > owned and managed by PG and should not be whacked around by external
> > processes.
> If you do not write to archive_status, you basically have two options:
> 1. On every archive_command, recheck that the file being archived is
> identical to the file already in the archive. This hurts performance.
It's absolutely important to make sure that the files PG is asking to
archive have actually been archived, yes.
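That recheck need not be expensive: a simple byte-for-byte comparison of an existing archive copy, failing (so PG retries) when the copies differ, is enough. A minimal sketch of such an archive_command helper; the `ARCHIVE_DIR` variable and function name are illustrative, not anything PG provides:

```shell
# Hypothetical archive_command helper: succeed only once the segment is
# genuinely archived, rechecking any pre-existing copy rather than
# trusting it.  Intended to be invoked as: archive_one <%p> <%f>
archive_one() {
    wal_path="$1"                      # %p: path to the WAL file
    wal_name="$2"                      # %f: file name only
    dest="$ARCHIVE_DIR/$wal_name"

    if [ -f "$dest" ]; then
        # A copy already exists: report success only if it is
        # byte-for-byte identical; otherwise fail so PostgreSQL
        # retries and the operator can investigate.
        cmp -s "$wal_path" "$dest"
        return $?
    fi

    # No copy yet: copy via a temporary name and rename, so a crash
    # can never leave a truncated file under the final name.
    cp "$wal_path" "$dest.tmp" && mv "$dest.tmp" "$dest"
}
```

Wired up in postgresql.conf this would look something like `archive_command = '/path/to/archive_one.sh %p %f'`, with the script calling the function on its arguments.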
> 2. Hope that the files match. This adds no safety compared to whacking
> archive_status, and it is just as prone to core changes as writes are.
This blindly assumes that PG won't care about some other process
whacking around archive status files and I don't think that's a good
assumption to be operating under, and certainly not under the claim
that it's simply a 'performance' improvement.
> Well, PostgreSQL clearly has a problem which can be solved by a good
> parallel archiving API. Anything else is whacking around; just reading
> archive_status is no better than reading and writing.
Pushing files which archive_status indicates as ready is an entirely
different thing from whacking around the status files themselves, which
PG is managing itself.
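The read-only side of that is straightforward: an external archiver can list the `.ready` markers without ever creating or renaming a status file. A sketch, assuming the standard `pg_wal/archive_status` layout; the function name is illustrative:

```shell
# Read-only scan of an archive_status directory: print the WAL segment
# names PostgreSQL has marked ready, in order, touching nothing.
ready_segments() {
    statusdir="$1"
    for f in "$statusdir"/*.ready; do
        [ -e "$f" ] || continue        # no .ready files at all
        basename "$f" .ready           # strip the marker suffix
    done | sort
}
```

A parallel archiver can fan these names out to workers, but the transition from `.ready` to `.done` is still left entirely to PG via archive_command's exit status.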
Thanks!
Stephen