Greetings,
* hubert depesz lubaczewski (depesz@depesz.com) wrote:
> I'm in a situation where we quite often generate more WAL than we can
> archive. The thing is - archiving takes a long(ish) time, as it's a
> multi-step process and includes talking to remote servers over the network.
>
> I tested that, simply by running archiving in parallel, I can easily get
> 2-3 times higher throughput.
>
> But - I'd prefer that PostgreSQL keep knowing what is archived and what
> is not, so I can't do the parallelization on my own.
>
> So, the question is: is it technically possible to have parallel
> archiving, and would anyone be willing to work on it? (Sorry, my
> C skills are basically nonexistent, so I can't realistically hack it myself.)
Not entirely sure what the concern is around "postgresql knowing what is
archived", but pgbackrest already does exactly this kind of parallel
archiving for environments where the WAL volume is larger than a single
thread can handle, and we've been rewriting it in C specifically to make
it fast enough to keep PG up to date on what has already been pushed.
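For anyone curious, the parallel/asynchronous WAL push is just
configuration on the pgbackrest side. A minimal sketch looks roughly
like this (the stanza name, paths, and process count below are only
placeholders, not a recommendation):

  # postgresql.conf
  archive_mode = on
  archive_command = 'pgbackrest --stanza=main archive-push %p'

  # /etc/pgbackrest/pgbackrest.conf
  [global]
  repo1-path=/var/lib/pgbackrest
  spool-path=/var/spool/pgbackrest
  archive-async=y

  [global:archive-push]
  # number of parallel WAL-push processes (placeholder value)
  process-max=4

  [main]
  pg1-path=/var/lib/postgresql/data

The key point for the original question is that PG still only learns
about a segment through the normal archive_command return code, so its
view of what has been archived stays correct even though the actual
pushing happens in parallel.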
Happy to discuss it further, as well as other related topics and how
backup software could be given better APIs to tell PG what's been
archived, etc.
Thanks!
Stephen