Re: long-standing data loss bug in initial sync of logical replication - Mailing list pgsql-hackers

From Amit Kapila
Subject Re: long-standing data loss bug in initial sync of logical replication
Msg-id CAA4eK1Lges1uiEZiuoxQ8ZyzH+wnGuw5U-ksBNDX=v6H4krkZQ@mail.gmail.com
In response to Re: long-standing data loss bug in initial sync of logical replication  (vignesh C <vignesh21@gmail.com>)
List pgsql-hackers
On Wed, Jul 17, 2024 at 5:25 PM vignesh C <vignesh21@gmail.com> wrote:
>
> On Wed, 17 Jul 2024 at 11:54, Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> > On Tue, Jul 16, 2024 at 6:54 PM vignesh C <vignesh21@gmail.com> wrote:
> >
> > BTW, I noticed that we don't take any table-level locks for CREATE
> > PUBLICATION ... FOR ALL TABLES (and DROP PUBLICATION). Can that
> > create a similar problem? I haven't tested it, so I'm not sure, but
> > even if there is a problem for the CREATE case, it should lead to
> > some ERROR like a missing publication.
>
> I tested these scenarios, and as you expected, it throws an error for
> the CREATE PUBLICATION case:
> 2024-07-17 14:50:01.145 IST [481526] 481526  ERROR:  could not receive
> data from WAL stream: ERROR:  publication "pub1" does not exist
>         CONTEXT:  slot "sub1", output plugin "pgoutput", in the change
> callback, associated LSN 0/1510CD8
> 2024-07-17 14:50:01.147 IST [481450] 481450  LOG:  background worker
> "logical replication apply worker" (PID 481526) exited with exit code
> 1
>
> The steps for this process are as follows:
> 1) Create tables in both the publisher and subscriber.
> 2) On the publisher: Create a replication slot.
> 3) On the subscriber: Create a subscription using the slot created by
> the publisher.
> 4) On the publisher:
> 4.a) Session 1: BEGIN; INSERT INTO T1;
> 4.b) Session 2: CREATE PUBLICATION FOR ALL TABLES
> 4.c) Session 1: COMMIT;
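>
> Spelled out as SQL (a rough sketch; names illustrative, with 'sub1'
> and 'pub1' matching the log above):
>
> -- 1) on both nodes:
> CREATE TABLE t1 (val int);
> -- 2) on the publisher:
> SELECT pg_create_logical_replication_slot('sub1', 'pgoutput');
> -- 3) on the subscriber:
> CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=postgres'
>     PUBLICATION pub1 WITH (create_slot = false, slot_name = 'sub1');
> -- 4) on the publisher:
> BEGIN; INSERT INTO t1 VALUES (1);        -- 4.a) session 1
> CREATE PUBLICATION pub1 FOR ALL TABLES;  -- 4.b) session 2
> COMMIT;                                  -- 4.c) session 1
> -- the walsender then fails with: publication "pub1" does not exist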
>
> Since we throw a "publication does not exist" error, there is no
> inconsistency issue here.
>
> However, an issue persists with DROP PUBLICATION on an ALL TABLES
> publication, where data continues to replicate even after the
> publication is dropped. This happens because the open transaction
> consumes the invalidation, causing the publication info to be
> revalidated using the old snapshot. As a result, both the open
> transaction and the subsequent transactions keep getting replicated.
>
> We can reproduce this issue by following these steps in a logical
> replication setup with an "ALL TABLES" publication, all on the
> publisher:
> 1) Session 1: BEGIN; INSERT INTO T1 VALUES (val1);
> 2) Session 2: DROP PUBLICATION
> 3) Session 1: COMMIT;
> 4) Session 1: INSERT INTO T1 VALUES (val2);
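>
> Or, as a concrete script (names illustrative):
>
> -- assuming subscription sub1 already replicates the ALL TABLES
> -- publication pub1; on the publisher:
> BEGIN; INSERT INTO t1 VALUES (1);   -- session 1 (val1)
> DROP PUBLICATION pub1;              -- session 2
> COMMIT;                             -- session 1
> INSERT INTO t1 VALUES (2);          -- session 1 (val2)
> -- both rows still arrive on the subscriber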
>
> Even after dropping the publication, both val1 and val2 are still
> being replicated to the subscriber. This means that both the
> in-progress concurrent transaction and the subsequent transactions are
> being replicated.
>
> I don't think locking all tables is a viable solution in this case, as
> it would require asking the user to refrain from performing any
> operations on any of the tables in the database while creating a
> publication.
>

Indeed, locking all tables in the database to prevent concurrent DMLs
for this scenario also looks odd to me. The other alternative,
previously suggested by Andres, is to distribute invalidations from
catalog-modifying transactions to all concurrent in-progress
transactions [1], but as mentioned, this could add overhead. One
possibility to reduce the overhead is to selectively distribute
invalidations only for publication-related catalogs, but I haven't
analyzed the feasibility of that.

We need more opinions to decide here, so let me summarize the problem
and the solutions discussed. As explained with an example in an email
[1], the problem is that logical decoding doesn't process
invalidations corresponding to DDLs for already in-progress
transactions.

We discussed preventing DMLs in the first place when concurrent DDLs
like ALTER PUBLICATION ... ADD TABLE ... are in progress. The solution
discussed was to acquire ShareRowExclusiveLock on all the tables being
added via such commands (a weaker lock such as ShareUpdateExclusiveLock
would not conflict with the RowExclusiveLock taken by DML). Further
analysis revealed that the same handling is required for ALTER
PUBLICATION ... ADD TABLES IN SCHEMA, which means locking all the
tables in the specified schemas. DROP PUBLICATION also seems to have
similar symptoms, which means that in the worst case (where the
publication is FOR ALL TABLES) we would have to lock all the tables in
the database. We are not sure that is acceptable, so the other
alternative we can pursue is to distribute invalidations in the
logical decoding infrastructure [1], which has its own downsides.
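
For reference, the ADD TABLE case from [1] follows the same
interleaving pattern (a sketch; names illustrative):

-- assuming subscription sub1 uses publication pub1, which does not
-- yet include t2; on the publisher:
BEGIN; INSERT INTO t2 VALUES (1);     -- session 1
ALTER PUBLICATION pub1 ADD TABLE t2;  -- session 2
COMMIT;                               -- session 1
INSERT INTO t2 VALUES (2);            -- session 1
-- Decoding session 1's transaction rebuilds the cached publication
-- info for t2 using the pre-ALTER snapshot after the invalidation
-- from the ALTER has already been consumed, so later changes to t2
-- can be silently skipped instead of replicated.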

Thoughts?

[1] - https://www.postgresql.org/message-id/20231118025445.crhaeeuvoe2g5dv6%40awork3.anarazel.de

--
With Regards,
Amit Kapila.


