Re: Multi-Master Logical Replication - Mailing list pgsql-hackers

From Peter Smith
Subject Re: Multi-Master Logical Replication
Date
Msg-id CAHut+Pt-B+He+ALEESboB9YjYEKKrO_jXENZH=KtX14UUFRJDA@mail.gmail.com
In response to Re: Multi-Master Logical Replication  (Yura Sokolov <y.sokolov@postgrespro.ru>)
Responses Re: Multi-Master Logical Replication  (vignesh C <vignesh21@gmail.com>)
Re: Multi-Master Logical Replication  (Bruce Momjian <bruce@momjian.us>)
List pgsql-hackers
On Fri, Apr 29, 2022 at 2:16 PM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:
>
> > On Thu, 28/04/2022 at 17:37 +0530, vignesh C wrote:
> > On Thu, Apr 28, 2022 at 4:24 PM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:
> > > On Thu, 28/04/2022 at 09:49 +1000, Peter Smith wrote:
> > >
> > > > 1.1 ADVANTAGES OF MMLR
> > > >
> > > > - Increases write scalability (e.g., all nodes can write arbitrary data).
> > >
> > > I've never heard of transaction-aware multimaster increasing
> > > write scalability. Moreover, usually even non-transactional
> > > multimaster doesn't increase write scalability. At best it
> > > doesn't decrease it.
> > >
> > > That is because all hosts have to write all changes anyway, while
> > > the side costs increase due to additional network interchange,
> > > interlocking (for transaction-aware MM), and increased latency.
> >
> > I agree it won't increase in all cases, but it will be better in a
> > few cases, such as when the user works across different geographical
> > regions operating on independent schemas in asynchronous mode. Since
> > the write node is closer to the geographical zone, the performance
> > will be better in those cases.
>
> From the EnterpriseDB BDR page [1]:
>
> > Adding more master nodes to a BDR Group does not result in
> > significant write throughput increase when most tables are
> > replicated because BDR has to replay all the writes on all nodes.
> > Because BDR writes are in general more effective than writes coming
> > from Postgres clients via SQL, some performance increase can be
> > achieved. Read throughput generally scales linearly with the number
> > of nodes.
>
> And I'm sure EnterpriseDB does it best.
>
> > > On Thu, 28/04/2022 at 08:34 +0000, kuroda.hayato@fujitsu.com wrote:
> > > > Dear Laurenz,
> > > >
> > > > Thank you for your interest in our works!
> > > >
> > > > > I am missing a discussion how replication conflicts are handled to
> > > > > prevent replication from breaking
> > > >
> > > > Actually we don't have plans for developing the feature that avoids conflict.
> > > > We think that it should be done as core PUB/SUB feature, and
> > > > this module will just use that.
> > >
> > > If you really want to have proper isolation levels (
> > > Read Committed? Repeatable Read?) and/or want to have the
> > > same data on each "master", there is no easy way. If you
> > > think it will be "easy", you are already wrong.
> >
> > The synchronous_commit and synchronous_standby_names configuration
> > parameters will help in getting the same data across the nodes. Can
> > you give an example for the scenario where it will be difficult?
>
> So, synchronous or asynchronous?
> Synchronous commit on every master, on every live master, or on a
> quorum of masters?
>
> And it is not about synchronicity. It is about determinism in
> conflict resolution.
>
> If you have fully deterministic conflict resolution that works
> exactly the same way on each host, then it is possible to have the
> same data on each host. (But it will not be transactional.) And it
> seems EDB BDR achieved this.
>
> Or if you have fully and correctly implemented one of the
> distributed transaction protocols.
>
> [1]  https://www.enterprisedb.com/docs/bdr/latest/overview/#characterising-bdr-performance
>
> regards
>
> ------
>
> Yura Sokolov

Thanks for your feedback.

This MMLR proposal was mostly just to create an interface that makes
it easier to use the PostgreSQL core logical replication CREATE
PUBLICATION/SUBSCRIPTION commands for table sharing among a set of
nodes. Otherwise, this is difficult for a user to do manually (e.g.
the difficulties mentioned in section 2.2 of the original post [1]:
dealing with initial table data, coordinating the timing/locking to
avoid concurrent updates, getting the SUBSCRIPTION option for
copy_data exactly right, etc.).
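
To give a sense of what that manual setup looks like, below is a
minimal sketch for just two nodes sharing one table (the node names,
connection strings, and table name are invented for the example, and
the origin = none subscription option only exists in later PostgreSQL
releases):

  -- node1 already holds data in table t; publish it
  CREATE PUBLICATION pub_node1 FOR TABLE t;

  -- node2: publish its copy of t, then subscribe to node1,
  -- copying node1's existing rows across
  CREATE PUBLICATION pub_node2 FOR TABLE t;
  CREATE SUBSCRIPTION sub_node2_node1
      CONNECTION 'host=node1 dbname=testdb'
      PUBLICATION pub_node1
      WITH (copy_data = true);

  -- node1: subscribe to node2, but do NOT copy the data again
  -- (the rows are already here). Writes must be quiesced while
  -- these subscriptions are created, otherwise changes can be
  -- lost or duplicated. origin = none (later releases) stops
  -- applied changes from being sent back to where they came from.
  CREATE SUBSCRIPTION sub_node1_node2
      CONNECTION 'host=node2 dbname=testdb'
      PUBLICATION pub_node2
      WITH (copy_data = false, origin = none);

Even with only two nodes the copy_data choices differ per
subscription, and every extra node multiplies the publications and
subscriptions that must be kept consistent; that bookkeeping is what
the proposed interface is intended to hide.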

At this time we have no provision for HA, transaction consistency
awareness, conflict resolution, node failure detection, DDL
replication, etc. Some of these features, like DDL replication, are
currently being implemented [2], so when committed they will become
available in core and can then be integrated into this module.

Once the base feature of the current MMLR proposal is done, perhaps it
can be extended in subsequent versions.

Probably our calling this “Multi-Master” has been
misleading/confusing, because that term implies much more to other
readers. We really only intended it to mean the ability to set up
logical replication across a set of nodes. Of course, we can rename
the proposal (and API) to something different if there are better
suggestions.

------
[1] https://www.postgresql.org/message-id/CAHut%2BPuwRAoWY9pz%3DEubps3ooQCOBFiYPU9Yi%3DVB-U%2ByORU7OA%40mail.gmail.com
[2]
https://www.postgresql.org/message-id/flat/45d0d97c-3322-4054-b94f-3c08774bbd90%40www.fastmail.com#db6e810fc93f17b0a5585bac25fb3d4b

Kind Regards,
Peter Smith.
Fujitsu Australia


