The shell script call used in the archive_command is a bad idea.
Each time a new segment is archived a new shell is started, which adds massive overhead, and then you have the extra overhead of the ssh transfer.
I suggest you stick with the simple cp command with the test option from the manual, and then transfer the archived segments in a second step using a more reliable tool like rsync.
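Something along these lines; the archive_command below is the example from the PostgreSQL manual, while the archive directory, backup host and the cron step are only placeholders:

# postgresql.conf: the manual's example - fail if the target file already exists
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'

# then ship the local archive to the backup server separately, e.g. from cron
rsync -a /mnt/server/archivedir/ backupserver:/srv/wal_archive/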
archive_command = '/var/lib/postgresql/scripts/archive_copy.sh %p %f' # command to use to archive a logfile segment
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
The archive command makes a local copy and then copies the segment to the backup server via ssh. Both copies are md5-checked and retried up to 3 times in case of failure.
Under heavy load I have seen that some WALs are skipped, some are smaller than expected, and some are corrupted (i.e. the loop fails 3 times).
I'm not sure I am checking the return value correctly. What is the expected behaviour of the archiver? Will it retry the archive if archive_command returns something other than 0? Will it retain the WAL segment until it is successfully archived?
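For reference, a minimal sketch of what such a script might look like; this is an assumption based on the description above, not the actual archive_copy.sh, and the directories and host name are placeholders:

#!/bin/bash
# Hypothetical sketch of archive_copy.sh as described above (not the real
# script): copy the segment locally and to the backup server via ssh,
# md5-check both copies, retry up to 3 times, exit non-zero on any failure.
set -u

WAL_PATH="$1"   # %p: path to the segment, relative to the data directory
WAL_NAME="$2"   # %f: file name of the segment

LOCAL_DIR=/var/lib/postgresql/wal_archive   # placeholder
REMOTE_HOST=backupserver                    # placeholder
REMOTE_DIR=/srv/wal_archive                 # placeholder

SRC_MD5=$(md5sum "$WAL_PATH" | awk '{print $1}') || exit 1

local_copy() {
    for try in 1 2 3; do
        cp "$WAL_PATH" "$LOCAL_DIR/$WAL_NAME" &&
          [ "$(md5sum "$LOCAL_DIR/$WAL_NAME" | awk '{print $1}')" = "$SRC_MD5" ] &&
          return 0
    done
    return 1
}

remote_copy() {
    for try in 1 2 3; do
        scp -q "$WAL_PATH" "$REMOTE_HOST:$REMOTE_DIR/$WAL_NAME" &&
          [ "$(ssh "$REMOTE_HOST" md5sum "$REMOTE_DIR/$WAL_NAME" | awk '{print $1}')" = "$SRC_MD5" ] &&
          return 0
    done
    return 1
}

# Any failure must surface as a non-zero exit status; otherwise the archiver
# assumes the segment is safely archived and will eventually recycle it.
local_copy && remote_copy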
German Becker wrote:
> From my experience, postgres will delete WAL (after checkpoint) regardless
> if they have been archived.
> Are you saying this is abnormal?
That would be quite abnormal. Could it be that your archive_command has exit status 0 even if something goes wrong?
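Purely as an illustration (not your actual script), a fragment like this masks an scp failure because the script's exit status is taken from its last command:

# the scp failure is hidden: logger succeeds, so the script exits with 0
scp "$1" backupserver:/srv/wal_archive/"$2"
logger "archived $2"
# the archiver sees exit status 0 and considers the segment safely archived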
What are the archive settings?
Yours, Laurenz Albe
--
Federico Campoli
DE MATERIALIZING, UK, Planet Earth, The Milky Way Galaxy
/*******************************
There's no point being grown-up
if you can't be childish sometimes.
(The fourth Doctor)
http://www.pgdba.co.uk
*******************************/