Re: row filtering for logical replication - Mailing list pgsql-hackers

From Rahila Syed
Subject Re: row filtering for logical replication
Msg-id CAH2L28tvKGLO9ZRFsaA5msXqZzF6wHgxgXNvnCQ2dRQfs1T8XQ@mail.gmail.com
In response to Re: row filtering for logical replication  ("Euler Taveira" <euler@eulerto.com>)
Responses Re: row filtering for logical replication
List pgsql-hackers
Hi Euler,

While running some tests on the v13 patches, I noticed that if the published table data
already exists on the subscriber database before the subscription is created, an error
is reported as follows at CREATE SUBSCRIPTION / table synchronization time.

With the patch:

2021-03-29 14:32:56.265 IST [78467] STATEMENT:  CREATE_REPLICATION_SLOT "pg_16406_sync_16390_6944995860755251708" LOGICAL pgoutput USE_SNAPSHOT
2021-03-29 14:32:56.279 IST [78467] LOG:  could not send data to client: Broken pipe
2021-03-29 14:32:56.279 IST [78467] STATEMENT:  COPY (SELECT aid, bid, abalance, filler FROM public.pgbench_accounts WHERE (aid > 0)) TO STDOUT
2021-03-29 14:32:56.279 IST [78467] FATAL:  connection to client lost
2021-03-29 14:32:56.279 IST [78467] STATEMENT:  COPY (SELECT aid, bid, abalance, filler FROM public.pgbench_accounts WHERE (aid > 0)) TO STDOUT
2021-03-29 14:33:01.302 IST [78470] LOG:  logical decoding found consistent point at 0/4E2B8460
2021-03-29 14:33:01.302 IST [78470] DETAIL:  There are no running transactions.

Without the patch:

2021-03-29 15:05:01.581 IST [79029] ERROR:  duplicate key value violates unique constraint "pgbench_branches_pkey"
2021-03-29 15:05:01.581 IST [79029] DETAIL:  Key (bid)=(1) already exists.
2021-03-29 15:05:01.581 IST [79029] CONTEXT:  COPY pgbench_branches, line 1
2021-03-29 15:05:01.583 IST [78538] LOG:  background worker "logical replication worker" (PID 79029) exited with exit code 1
2021-03-29 15:05:06.593 IST [79031] LOG:  logical replication table synchronization worker for subscription "test_sub2", table "pgbench_branches" has started

Without the patch, the COPY command throws an ERROR, but with the patch, the same scenario results in the client connection being lost.

I didn't investigate it further, but it looks like we should maintain the existing behaviour
when table synchronization fails due to duplicate data.
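For reference, the scenario above can be reproduced with something like the following sketch. It assumes a publisher and a subscriber cluster that have both been initialized with pgbench (so the subscriber's tables are already populated before the initial sync), and uses the WHERE-clause syntax from the patch under review; the publication/subscription names and connection string are placeholders.

```sql
-- On the publisher (row filter syntax from the patch):
CREATE PUBLICATION test_pub FOR TABLE pgbench_accounts WHERE (aid > 0);

-- On the subscriber, whose pgbench tables already contain data,
-- so the initial COPY during table synchronization hits duplicate keys:
CREATE SUBSCRIPTION test_sub2
    CONNECTION 'host=localhost port=5432 dbname=postgres'  -- placeholder
    PUBLICATION test_pub;
```

With the patch applied, the sync worker's failure surfaces on the publisher as "connection to client lost" rather than the duplicate-key ERROR seen without the patch.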

Thank you,
Rahila Syed
