Re: Support logical replication of DDLs - Mailing list pgsql-hackers
From: Masahiko Sawada
Subject: Re: Support logical replication of DDLs
Msg-id: CAD21AoDtNMxE30ExN5b6AXriyjPpEZZtKDbgGD91udS8FBBQ_Q@mail.gmail.com
In response to: Re: Support logical replication of DDLs (Zheng Li <zhengli10@gmail.com>)
List: pgsql-hackers
On Fri, May 27, 2022 at 7:19 AM Zheng Li <zhengli10@gmail.com> wrote:
>
> Hi Masahiko,
>
> > Thank you for updating the patches!
> >
> > I've not looked at these patches in-depth yet but with this approach,
> > how do you think we can handle the DDL syntax differences between
> > major versions? DDL syntax or behavior could be changed by future
> > changes and I think we need to somehow deal with the differences. For
> > example, if the user uses logical replication for major version
> > upgrade, the publisher is older than the subscriber. We might have to
> > rewrite the DDL before applying it to the subscriber because the DDL
> > executed on the publisher no longer works on a new PostgreSQL version.
>
> I don't think we will allow this kind of situation to happen in the
> first place for backward compatibility.

It seems like a big limitation to me.

> If a DDL no longer works on a new version of PostgreSQL, the user will
> have to change the application code as well. So even if it happens for
> whatever reason, we could either
>
> 1. fail the apply worker and let the user fix such DDL because they'll
> have to fix the application code anyway when this happens.

Once the apply worker has received the DDL, if the DDL doesn't work on
the subscriber, it will enter an infinite loop until the problem is
fixed. And if the failure is due to a syntax error, how does the user
fix it?

> 2. add guard rail logic in the apply worker to automatically fix such
> DDL if possible, knowing the version of the source and target. Similar
> logic must have been implemented for pg_dump/restore/upgrade.

If I'm not missing something, there is no such implementation in
pg_dump/restore/upgrade. When we use pg_dump/pg_restore for major
version upgrades, we usually use the newer version pg_dump to fetch
objects from the older version server, then restore the objects by
using the newer version pg_restore.
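For reference, the usual flow looks roughly like the sketch below: the *newer* client binaries talk to the *older* server, so it is the new pg_dump that emits DDL acceptable to the new server. The binary path, ports, and database name are placeholders, not anything prescribed by the tools:

```sh
# Sketch of a major-version upgrade with the newer client tools.
# Paths, ports, and the database name (mydb) are placeholders.

# Dump the old cluster (port 5432) with the new version's pg_dump,
# which fetches objects from the old server and emits DDL in a form
# the new server version accepts.
/usr/lib/postgresql/16/bin/pg_dump -p 5432 -Fc -f mydb.dump mydb

# Restore into the new cluster (port 5433) with the new pg_restore.
/usr/lib/postgresql/16/bin/pg_restore -p 5433 -d mydb mydb.dump
```

The point is that the version-bridging logic lives in the new pg_dump itself, not in any rewrite layer on the restore side.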
> > or we might have to add some options to the DDL before the
> > application in order to keep the same behavior. This seems to
> > require a different solution from what the patch does for the
> > problem you mentioned such as "DDL involving multiple tables where
> > only some tables are replicated".
>
> First of all, this case can only happen when the customer chooses to
> only replicate a subset of the tables in a database, in which case
> table level DDL replication is chosen instead of database level DDL
> replication (where all tables and DDLs are replicated). I think the
> solution would be:
>
> 1. make best effort to detect such DDLs on the publisher and avoid
> logging of such DDLs in table level DDL replication.

I think it's better to support this case.

> 2. apply worker will fail to replay such command due to missing
> objects if such DDLs didn't get filtered on the publisher for some
> reason. This should be rare and I think it's OK even if it happens,
> we'll find out why and fix it.

I'm not sure it's rare, since replicating a subset of tables is a
common use case of logical replication. But even if we want to go this
way, I think we should consider how to fix it at this stage; otherwise
we will end up redesigning it.

Regards,

--
Masahiko Sawada
EDB: https://www.enterprisedb.com/