On 11/20/14, Adrian Klaver <adrian.klaver@aklaver.com> wrote:
> On 11/20/2014 12:30 PM, zach cruise wrote:
>>>
>>> For more info see:
>>>
>>> http://www.postgresql.org/docs/9.3/interactive/continuous-archiving.html
>> To be clear: I change my 2-VM setup {"1. master (dev) - 2. slave
>> (prod)"} to 3 VMs {"1. master (dev) - 2. slave (prod) - 3.
>> archive (WAL)"}.
>>
>> but what do i gain?
>
> Extra protection against failure, maybe.
>
> So:
>
>        ---> WAL Archive ---
>        |                  |
>        |    Streaming     |
> master -------------------> slave
>
> If the direct link between the master and slave goes down, the slave can
> still get WALs from the archive. If the archive machine goes down you
> still have the direct link. If you take the slave down the master can
> still push WALs to the archive. This assumes the 'machines' are actually
> separated and connecting through different networks. You say you are
> using VMs, but not where they are running. If they are all running on
> the same machine running through the same network link then you really
> do not have protection against network issues. The same applies if
> the host machine goes down. This is one of those pen-and-paper times,
> when you sketch out the arrangement and start doing what-ifs.
Master, slave and archive can be 3 separate VMs on one host, with
their clones on a 2nd and 3rd host.
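
To make sure I understand, here is roughly the 9.3 configuration I
have in mind for that layout. The hostnames and the /archive path are
placeholders, and I have not tested this; it just combines the
archiving and streaming settings from the docs linked above, assuming
/archive is a directory shared with the archive VM (e.g. over NFS).

On the master, in postgresql.conf:

  wal_level = hot_standby
  max_wal_senders = 3
  archive_mode = on
  # copy each finished segment to the (placeholder) archive location
  archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'

On the slave, in recovery.conf:

  standby_mode = 'on'
  # direct streaming link to the master (placeholder host)
  primary_conninfo = 'host=master port=5432 user=replication'
  # fall back to the archive if streaming is unavailable
  restore_command = 'cp /archive/%f %p'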
A follow-up question on WAL recycling ("When WAL archiving is being
done, the log segments must be archived before being recycled or
removed", from http://www.postgresql.org/docs/9.3/static/wal-configuration.html):
Say streaming is off:
* If both master and archive are down, the slave is still up and
running. Yes?
* If the master writes data while the archive is down, it will copy
over to the slave when the archive is back up. Yes?
* But if the WAL is recycled before the archive is back up, it will
not copy over to the slave. Yes? (See the check sketched below.)
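
On that last point, my reading of the quoted sentence is that the
master will not recycle a segment until archive_command has succeeded
for it, so unarchived segments should pile up in pg_xlog rather than
be recycled. If I am reading the docs right, segments still waiting
to be archived are marked with .ready files, so I assume I could
watch the backlog like this ($PGDATA standing in for my data
directory):

  # count WAL segments still waiting to be archived on the master
  ls $PGDATA/pg_xlog/archive_status/*.ready | wc -l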
See, my concern with a separate archive is that if the archive is down
and the master gets stuck retrying to push the same segment again and
again, there may be a problem in recovery when the archive is back up.
No?
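
For reference, the archive_command I have in mind is the example from
the continuous-archiving page linked above (archivedir is the docs'
placeholder). My understanding is that a nonzero exit status just
makes the master retry the same segment later, and the 'test ! -f'
guard keeps a retry from clobbering a segment that already made it
across:

  archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'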