On 08/04/2016 12:55 PM, Patrick B wrote:
> @Adrian,
>
>
> Seems to me the settings for nice and ionice above would, on a busy
> machine, slow down the transfer. Has there always been a notable
> time difference in the transfer or has it gotten worse over time?
>
> Yep... I also thought about that. Especially because the master is
> constantly at 100% I/O (we still use SATA disks)...
>
> I'm thinking about removing that `ionice` command... I don't need to
> restart Postgres, right? Just reload the config?
https://www.postgresql.org/docs/9.5/static/continuous-archiving.html#BACKUP-ARCHIVING-WAL
"However, archive_command can be changed with a configuration file reload."
>
>
> @John R Pierce,
>
> normally, you would ship the archived wal files to a file server via
> cp-over-nfs or scp, and have the slaves access them as needed via
> the recovery.conf
>
> What if the NFS server goes down? Or the network? We have had that
> kind of problem in the past; that's why I'm shipping the WAL files to
> each slave separately. It also gives us an extra copy of them.
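For reference, what John describes would look roughly like this in
recovery.conf on each slave, assuming the archives are exposed on an
NFS mount at /mnt/wal_archive (the path is a placeholder):

    standby_mode = 'on'
    restore_command = 'cp /mnt/wal_archive/%f %p'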
>
>
> @Venkata Balaji N,
>
>
> Not sure why the script is so complex. Do you see any messages in
> the PostgreSQL log file on the master or on the slaves that would
> indicate the reason for the delayed shipping of WAL archives? Did
> you notice any network-level issues?
>
> Yes, the script is complex... I've hidden almost all of it for
> privacy purposes, sorry.
>
> I don't see any messages in the log files, neither on the master nor
> on the slaves. I just see the messages saying the WAL files were
> successfully shipped to the slaves.
>
> Also, no network-level issues: I have four slaves with streaming
> replication and all of them are working fine, and my backup server
> has never failed.
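A quick way to double-check that the streaming slaves really are
keeping up is to look at pg_stat_replication on the master (column
names as of 9.5):

    SELECT client_addr, state, sent_location, replay_location
    FROM pg_stat_replication;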
>
>
> Thanks,
>
> Patrick
>
--
Adrian Klaver
adrian.klaver@aklaver.com