Thread: WAL Shipping and streaming replication

WAL Shipping and streaming replication

From: CS DBA
Date: Mon, Sep 28, 2015
All;

We have a 3 node replication setup:

Master (node1) --> Cascading Replication Node (node2)  --> Downstream
Standby node (node3)

We will be deploying WAL archiving from the master for PITR backups, and
we'll point the standbys' recovery.conf files at the staged WAL files in
case they need to revert to log shipping.
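
For the fallback we're picturing a restore_command wrapper roughly like
the sketch below (just a sketch; the script name and archive path are
placeholders, not our real layout):

    #!/usr/bin/env python
    # Sketch of a restore_command wrapper: copy the requested WAL segment
    # out of the staged archive. Wired up in recovery.conf as e.g.
    #   restore_command = '/usr/local/bin/fetch_wal.py %f %p'
    import os
    import shutil
    import sys

    ARCHIVE_DIR = '/mnt/wal_archive'   # placeholder for the staged WAL location

    def main():
        wal_name, dest_path = sys.argv[1], sys.argv[2]   # %f and %p
        src = os.path.join(ARCHIVE_DIR, wal_name)
        if not os.path.exists(src):
            return 1   # non-zero tells recovery the segment isn't there (yet)
        shutil.copy(src, dest_path)
        return 0

    if __name__ == '__main__':
        sys.exit(main())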

Question: what's the best way to ensure consistency of WAL archiving in
the case of changes (failover, etc.)? Can we set up the cascade node to
archive WALs only if it's the master? Is this a case where we should
deploy repmgr?
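
To make the second question concrete: what we have in mind is an
archive_command wrapper along these lines (again just a sketch; the data
directory, archive path, and script name are made up). If we understand
the docs right, with archive_mode = on a standby doesn't invoke
archive_command at all and only starts archiving after promotion, so this
is mostly a belt-and-suspenders check:

    #!/usr/bin/env python
    # Sketch of an archive_command wrapper that only ships WAL when this
    # node is currently the master. On 9.x a standby runs with recovery.conf
    # in its data directory (renamed to recovery.done at promotion), so its
    # presence is the "am I a standby?" test.
    # Wired up as: archive_command = '/usr/local/bin/archive_if_master.py %p %f'
    import os
    import shutil
    import sys

    DATA_DIR = '/var/lib/pgsql/9.4/data'   # placeholder
    ARCHIVE_DIR = '/mnt/wal_archive'       # placeholder

    def main():
        wal_path, wal_name = sys.argv[1], sys.argv[2]   # %p and %f
        if os.path.exists(os.path.join(DATA_DIR, 'recovery.conf')):
            return 0   # standby: report success, ship nothing
        dest = os.path.join(ARCHIVE_DIR, wal_name)
        if os.path.exists(dest):
            return 1   # refuse to overwrite an already-archived segment
        shutil.copy(wal_path, dest)
        return 0

    if __name__ == '__main__':
        sys.exit(main())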

Thanks in advance




Re: WAL Shipping and streaming replication

From: Scott Marlowe
Date: Mon, Sep 28, 2015
On Mon, Sep 28, 2015 at 8:48 AM, CS DBA <cs_dba@consistentstate.com> wrote:
> All;
>
> We have a 3 node replication setup:
>
> Master (node1) --> Cascading Replication Node (node2)  --> Downstream
> Standby node (node3)
>
> We will be deploying WAL archiving from the master for PITR backups, and
> we'll point the standbys' recovery.conf files at the staged WAL files in
> case they need to revert to log shipping.
>
> Question: what's the best way to ensure consistency of WAL archiving in the
> case of changes (failover, etc.)? Can we set up the cascade node to archive
> WALs only if it's the master? Is this a case where we should deploy repmgr?

Look up WAL-E. It works really well. We tried using OmniPITR and
it's buggy and doesn't seem to get fixed very quickly (if at all).


Re: WAL Shipping and streaming replication

From: Keith Fiske
Date: Mon, Sep 28, 2015


On Mon, Sep 28, 2015 at 10:54 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Mon, Sep 28, 2015 at 8:48 AM, CS DBA <cs_dba@consistentstate.com> wrote:
>> All;
>>
>> We have a 3 node replication setup:
>>
>> Master (node1) --> Cascading Replication Node (node2)  --> Downstream
>> Standby node (node3)
>>
>> We will be deploying WAL archiving from the master for PITR backups, and
>> we'll point the standbys' recovery.conf files at the staged WAL files in
>> case they need to revert to log shipping.
>>
>> Question: what's the best way to ensure consistency of WAL archiving in
>> the case of changes (failover, etc.)? Can we set up the cascade node to
>> archive WALs only if it's the master? Is this a case where we should
>> deploy repmgr?
>
> Look up WAL-E. It works really well. We tried using OmniPITR and
> it's buggy and doesn't seem to get fixed very quickly (if at all).


If you've encountered bugs with OmniPITR, please feel free to open an
issue on GitHub. If you look at the issue and commit history, you can see
that we do indeed fix reported issues or respond to help people with
problems they are having.

https://github.com/omniti-labs/omnipitr

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com

Re: WAL Shipping and streaming replication

From: hubert depesz lubaczewski
Date:
On Mon, Sep 28, 2015 at 08:54:54AM -0600, Scott Marlowe wrote:
> Look up WAL-E. It works really well. We tried using OmniPITR and
> it's buggy and doesn't seem to get fixed very quickly (if at all).

Any examples? I'm a developer of OmniPITR, and as far as I know there are
(currently) no unfixed bugs, and from what I can tell we fix them pretty
fast after they get reported.

depesz

--
The best thing about modern society is how easy it is to avoid contact with it.
                                                             http://depesz.com/


Re: WAL Shipping and streaming replication

From: Scott Marlowe
Date: Mon, Sep 28, 2015
On Mon, Sep 28, 2015 at 9:12 AM, Keith Fiske <keith@omniti.com> wrote:
>
>
> On Mon, Sep 28, 2015 at 10:54 AM, Scott Marlowe <scott.marlowe@gmail.com>
> wrote:
>>
>> On Mon, Sep 28, 2015 at 8:48 AM, CS DBA <cs_dba@consistentstate.com>
>> wrote:
>> > All;
>> >
>> > We have a 3 node replication setup:
>> >
>> > Master (node1) --> Cascading Replication Node (node2)  --> Downstream
>> > Standby node (node3)
>> >
>> > We will be deploying WAL archiving from the master for PITR backups, and
>> > we'll point the standbys' recovery.conf files at the staged WAL files in
>> > case they need to revert to log shipping.
>> >
>> > Question: what's the best way to ensure consistency of WAL archiving in
>> > the case of changes (failover, etc.)? Can we set up the cascade node to
>> > archive WALs only if it's the master? Is this a case where we should
>> > deploy repmgr?
>>
>> Look up WAL-E. It works really well. We tried using OmniPITR and
>> it's buggy and doesn't seem to get fixed very quickly (if at all).
>>
>
> If you've encountered bugs with OmniPITR, please feel free to open an issue
> on GitHub. If you look at the issue and commit history, you can see that we
> do indeed fix reported issues or respond to help people with problems they
> are having.
>
> https://github.com/omniti-labs/omnipitr

The issue was reported as: omnipitr-cleanup is SLOOOW, so we run
purgewal by hand because the cleanup is so slow it can't keep up. But
running it by hand is not supported.

We fixed the problem though; we wrote our own script and are now
moving to wal-e for all future stuff.
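
To be clear about what the purging amounts to: it's a
pg_archivecleanup-style sweep that drops archived segments older than a
cutoff. Not our actual script, just a simplified sketch of the idea
(archive path and usage are made up):

    #!/usr/bin/env python
    # Illustrative only: remove archived WAL segments that sort before a
    # given keep-point (e.g. the segment named in the latest base backup).
    # Usage: purge_wal.py /mnt/wal_archive 000000010000002A0000003F
    import os
    import re
    import sys

    WAL_RE = re.compile(r'^[0-9A-F]{24}$')   # plain WAL segment names

    def purge(archive_dir, keep_from):
        for name in os.listdir(archive_dir):
            # Segment names sort lexicographically in WAL order (within a
            # timeline), so anything below the keep-point can go.
            if WAL_RE.match(name) and name < keep_from:
                os.unlink(os.path.join(archive_dir, name))

    if __name__ == '__main__':
        purge(sys.argv[1], sys.argv[2])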


Re: WAL Shipping and streaming replication

From: hubert depesz lubaczewski
Date:
On Mon, Sep 28, 2015 at 12:53:37PM -0600, Scott Marlowe wrote:
> The issue was reported as: omnipitr-cleanup is SLOOOW, so we run
> purgewal by hand because the cleanup is so slow it can't keep up. But
> running it by hand is not supported.
>
> We fixed the problem though; we wrote our own script and are now
> moving to wal-e for all future stuff.

Where or when was it reported?
In the issue list I see two issues for cleanup (closed, of course), but
they don't mention slowness.

depesz

--
The best thing about modern society is how easy it is to avoid contact with it.
                                                             http://depesz.com/