On Mon, Aug 26, 2013 at 11:02 PM, Mistina Michal
<Michal.Mistina@virte.sk> wrote:
> Hi Masao.
> Thank you for the suggestion. Indeed, that could occur, most probably while I
> was testing a split-brain situation. In that case I turned off the network card on
> one node, and DRBD was in the primary role on both nodes. After the
> split-brain occurred I resynced DRBD: of the two primaries I promoted one as
> "primary" (winner) and demoted the other to "secondary" (victim). The data
> should have been consistent by that moment, but probably it wasn't.
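>
> For reference, the manual split-brain recovery I did roughly follows the
> standard DRBD procedure (resource name "r0" is a placeholder for my actual
> resource):
>
> ```shell
> # On the victim node (whose changes are discarded):
> drbdadm secondary r0
> drbdadm connect --discard-my-data r0
>
> # On the winner node, only if it is in StandAlone connection state:
> drbdadm connect r0
> ```
>
> After this, DRBD resynchronizes the victim from the winner; any writes made
> on the victim during the split-brain are lost, which may be where the
> inconsistency came from.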
>
> I am using DRBD in only one technical center. Data are synced by streaming
> replication to the secondary technical center, where another DRBD
> instance runs.
>
> It's like this:
>
> TC1:
> --- node1: DRBD (primary), pgsql
> --- node2: DRBD (secondary), pgsql
>
> TC2:
> --- node1: DRBD (primary), pgsql
> --- node2: DRBD (secondary), pgsql
>
> Within one technical center, pgsql runs on only one node at a time. This is
> managed by pacemaker/corosync.
> From the outside it looks as if only one postgresql server is
> running in each TC.
> TC1 (master) ==== streaming replication =====> TC2 (slave)
>
> If one node in a technical center fails, fail-over to the secondary node is
> very quick, thanks to the fast network within the technical center.
> Between TC1 and TC2 there is a WAN link. If something goes wrong and TC1
> becomes unavailable, I can switch to TC2 manually or automatically.
>
> Is there a more appropriate solution? Would you use something else?
Nope. I've heard of a similar configuration, though it used a shared-disk
failover solution instead of DRBD.
Regards,
--
Fujii Masao