Handle infinite recursion in logical replication setup

 Hi,

In logical replication, the walsender currently sends both the data
that is generated locally and the data that was replicated from other
instances. This results in infinite recursion in a circular logical
replication setup.
Here the user is trying to have a two-way replication setup, with
node1 publishing data to node2 and node2 publishing data to node1, so
that the user can perform DML operations on either node; this acts as
a two-way multi-master replication setup.

This problem can be reproduced with the following steps:
-- Instance 1
create table t1(c1 int);
create publication pub1 for table t1;

-- Instance 2
create table t1(c1 int);
create publication pub2 for table t1;
create subscription sub1 CONNECTION 'dbname=postgres port=5432'
publication pub1;

-- Instance 1
create subscription sub2 CONNECTION 'dbname=postgres port=5433'
publication pub2;
insert into t1 values(10);

In this scenario, the walsender for publication pub1 sends the data to
the apply worker for subscription sub1, which maps the data to the
local table and applies the individual changes as they are received.
Then the walsender for publication pub2 sends that applied data back
to the apply worker for subscription sub2, which again maps the data
to the local table and applies the changes. This process repeats
infinitely.

Currently we do not differentiate between locally generated data and
replicated data, and we send both, which causes the infinite
recursion.

We can see that the record count has increased significantly within a
short time:
select count(*) from t1;
  count
---------
 4000000
(1 row)

If the table had a primary key constraint, we would notice that the
first insert succeeds, and when the same insert is sent back, it fails
with a constraint error:
2022-02-23 09:28:43.592 IST [14743] ERROR:  duplicate key value
violates unique constraint "t1_pkey"
2022-02-23 09:28:43.592 IST [14743] DETAIL:  Key (c1)=(10) already exists.
2022-02-23 09:28:43.592 IST [14743] CONTEXT:  processing remote data
during "INSERT" for replication target relation "public.t1" in
transaction 727 at 2022-02-23 09:28:43.406738+05:30
2022-02-23 09:28:43.593 IST [14678] LOG:  background worker "logical
replication worker" (PID 14743) exited with exit code 1
2022-02-23 09:28:48.608 IST [14745] LOG:  logical replication apply
worker for subscription "sub2" has started
2022-02-23 09:28:48.624 IST [14745] ERROR:  duplicate key value
violates unique constraint "t1_pkey"
2022-02-23 09:28:48.624 IST [14745] DETAIL:  Key (c1)=(10) already exists.
2022-02-23 09:28:48.624 IST [14745] CONTEXT:  processing remote data
during "INSERT" for replication target relation "public.t1" in
transaction 727 at 2022-02-23 09:28:43.406738+05:30
2022-02-23 09:28:48.626 IST [14678] LOG:  background worker "logical
replication worker" (PID 14745) exited with exit code 1

The same problem can occur in any circular setup with three, four, or
more nodes, e.g.: a) node1 publishing to node2, b) node2 publishing to
node3, c) node3 publishing back to node1.

Here there are two problems for the user: a) incremental
synchronization of the table, where the walsender sends both local
data and replicated data, and b) initial table synchronization, where
the copy command sends both local data and replicated data.

The first problem, incremental synchronization of the table by the
walsender, can be solved as follows:
Currently, locally generated data has no replication origin associated
with it, while data that originated on another instance has a
replication origin associated with it. We could use this information
to differentiate locally generated data from replicated data and send
only the locally generated data. This "only_local" behavior could be
provided as an option while the subscription is created, e.g.:
CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=postgres port=5433'
PUBLICATION pub1 with (only_local = on);
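For instance, the two-way setup from the reproduction steps above
would stop recursing if both subscriptions were created with this
option. A minimal sketch, assuming the proposed only_local option (it
does not exist yet) and the same node/port layout as the earlier
steps:

-- Instance 2
create subscription sub1 CONNECTION 'dbname=postgres port=5432'
publication pub1 with (only_local = on);

-- Instance 1
create subscription sub2 CONNECTION 'dbname=postgres port=5433'
publication pub2 with (only_local = on);

-- An insert on either node is applied once on the other node and is
-- not sent back, because each walsender skips changes that carry a
-- replication origin.
insert into t1 values(10);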

I have attached a basic patch for this; if the idea is accepted, I
will work further to test more scenarios, add documentation, and post
an updated patch.

The second problem is the initial table synchronization, where the
copy command sends both local data and replicated data.

Let us consider the following scenario:
a) node1 publishing to node2 b) node2 publishing to node1. In this
case node1 will have replicated data from node2 and vice versa.

Now suppose the user wants node3 to subscribe to the data from node2,
so the user creates a subscription on node3 to get the data from
node2. During table synchronization we send the complete table data
from node2 to node3, and node2 contains both its own local data and
data replicated from node1. Currently we don't have a way to
differentiate between locally generated data and replicated data in
the heap, which will cause infinite recursion as described above.

To handle this, if the user has specified the only_local option, we
could throw a warning or error out while creating the subscription.
For this we could have a column srreplicateddata in
pg_subscription_rel that indicates whether the table has any
replicated data or not:
postgres=# select * from pg_subscription_rel;
 srsubid | srrelid | srsubstate | srsublsn  | srreplicateddata
---------+---------+------------+-----------+------------------
   16389 |   16384 | r          | 0/14A4640 | t
   16389 |   16385 | r          | 0/14A4690 | f
(2 rows)

In the above example, srreplicateddata being true indicates that table
t1, whose relid is 16384, has replicated data, and the other row
having srreplicateddata false indicates that table t2, whose relid is
16385, does not have replicated data.
When creating a new subscription, the subscriber will connect to the
publisher and check whether the relation has replicated data by
checking srreplicateddata in the publisher's pg_subscription_rel
catalog. If the table has any replicated data, a warning or error is
raised.
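For example, assuming the proposed srreplicateddata column, the check
run by the subscriber against the publisher could be a query like:

-- srreplicateddata is the proposed column; any row returned here
-- would make CREATE SUBSCRIPTION with only_local raise the
-- warning/error
select srrelid::regclass as replicated_table
from pg_subscription_rel
where srreplicateddata;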

Also, we could document the steps for handling the initial sync (see
the sketch after this list):
a) Complete the ongoing transactions on this table on the nodes in the
replication setup, i.e. node1 and node2 in the above case, so that the
table data is consistent.
b) Once there are no ongoing transactions, copy the table data from
any one of the nodes using the copy command.
c) Create the subscription with the copy_data option set to off.
d) Perform further transactions on the table.
e) All further transactions will be handled by the walsender, which
takes care of skipping replicated data and sending only the locally
generated data, i.e. node2 will send only its local data to node3.
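A minimal sketch of steps b) and c) for adding node3, assuming the
proposed only_local option; the subscription name sub3 and the dump
file path are illustrative:

-- On node2 (step b): with no ongoing transactions on t1, dump its
-- contents
\copy t1 to '/tmp/t1.dat'

-- On node3 (step b continued): create the table and load the seed
-- data
create table t1(c1 int);
\copy t1 from '/tmp/t1.dat'

-- On node3 (step c): subscribe without the initial copy; copy_data =
-- off skips the table sync, and the proposed only_local makes node2
-- send only its locally generated changes from now on
create subscription sub3 CONNECTION 'dbname=postgres port=5433'
publication pub2 with (copy_data = off, only_local = on);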

I'm not sure if there is a better way to handle this; if there is, we
could handle it accordingly.
Thoughts?

Regards,
Vignesh
