> In general, your handling of WAL files seems fragile and error-prone....
Indeed. I would recommend simply using rsync to handle pushing the
files. I see several advantages:
1. Distributed load - you aren't copying a full day of files all at once.
2. Very easy to set up - you can use it directly as your archive_command
if you wish (a rough example follows this list).
3. Atomic. Rsync copies new data to a temporary location that will only
be moved into place when the transfer is complete. The destination
server will never see a partial file. Depending on the FTP client/server
combo, you will likely end up with a partial file in the event of
communication failure.
4. Much more up-to-the-minute recovery data.
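For instance, the archive_command could look roughly like this (just a
sketch -- the host name and destination path are placeholders, not
anything from your actual setup):

    # postgresql.conf: push each WAL segment as soon as it is completed.
    # %p is the path to the segment, %f is its file name.
    archive_command = 'rsync -a %p postgres@standby.example.com:/var/lib/pgsql/wal_archive/%f'

PostgreSQL only considers the segment archived when the command exits
with status 0, so a failed rsync is simply retried later.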
In your scenario, what about using "cp -l" (or "ln") instead? Since a
hard link only creates a new directory entry pointing at the existing
data, it will be very fast and save a bunch of disk I/O on your server,
and it doesn't appear that the tempdir is there for much other than
organizational purposes anyway.
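Something along these lines, perhaps (the path is made up, and "cp -l"
only works if the tempdir is on the same filesystem as pg_xlog):

    # Link instead of copy: near-instant and no extra disk I/O.
    archive_command = 'cp -l %p /var/lib/pgsql/wal_tempdir/%f'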
I'm setting up some test machines to learn more about PITR and warm
backups, and am considering a two-stage process: "cp -l" to add each
file to a queue of files awaiting transfer, and a regular rsync job to
actually move the files to the destination machine. (The destination
machine will be over a WAN link, so I'd like to avoid having PG tied up
waiting for each rsync to complete.)
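Roughly what I have in mind -- untested, and the paths and host below
are invented for illustration. Stage 1, on the primary:

    # archive_command just hard-links the segment into a local queue,
    # so PostgreSQL never waits on the WAN link.
    archive_command = 'cp -l %p /var/lib/pgsql/wal_queue/%f'

and stage 2, a small script run from cron every few minutes:

    #!/bin/sh
    # Drain the queue to the standby over the WAN.
    QUEUE=/var/lib/pgsql/wal_queue
    DEST=postgres@standby.example.com:/var/lib/pgsql/wal_archive
    # --remove-source-files deletes each file from the queue only after
    # it has transferred successfully (needs a reasonably recent rsync).
    rsync -a --remove-source-files "$QUEUE"/ "$DEST"/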
Cheers,
Steve