Re: Questions on logical replication - Mailing list pgsql-general
From | Justin |
---|---|
Subject | Re: Questions on logical replication |
Date | |
Msg-id | CALL-XeNt1R+mfAtKYOnht1dMwf2_KCbJ8Q-mcJ8OcDnWXHaT1w@mail.gmail.com |
In response to | Re: Questions on logical replication (Koen De Groote <kdg.dev@gmail.com>) |
List | pgsql-general |
On Thu, Jun 13, 2024 at 6:01 AM Koen De Groote <kdg.dev@gmail.com> wrote:
>> Why? what benefit does this provide you?? Add all the tables when creating the publication and be done with it... I get this when trying to understand how this all works on test boxes, but for production NO idea what you're trying to accomplish
>
> Adding all tables at once means adding the gigantic tables as well. Disk IO and Network traffic are a serious concern, increased CPU usage affecting queries of the live system, as well as transaction wraparound.
>
> Initial sync can be a serious concern, depending on the size of the table.
The number of initial sync workers can be controlled via max_sync_workers_per_subscription;
see https://www.postgresql.org/docs/current/logical-replication-config.html
If you want to do one table at a time, just set the sync workers to 1.
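As a rough sketch of what that could look like on the subscriber (the parameter is the one from the docs linked above; pg_reload_conf() is enough because the setting can be changed without a restart):

  -- on the subscriber: limit initial table synchronization to one worker at a time
  ALTER SYSTEM SET max_sync_workers_per_subscription = 1;
  SELECT pg_reload_conf();
  SHOW max_sync_workers_per_subscription;  -- confirm the new value is in effect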
If bandwidth is a problem, either from the disk or the network, direct the traffic from the subscriber through a proxy or firewall to throttle the network speed. Keep in mind that slowing the copy will cause WAL to build up on the publisher.
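For watching that build-up on the publisher, something along these lines should work (just a sketch based on pg_replication_slots; slot names will differ on your system):

  -- on the publisher: how much WAL each replication slot is holding back
  SELECT slot_name, active,
         pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
  FROM pg_replication_slots;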
CPU load on the publisher is very low; it's actually hard to see it doing anything, as it's just reading the disk and streaming it to the subscriber.
For large tables with lots of indexes, drop the indexes so the copy completes as fast as possible and WAL doesn't build up. For me, WAL build-up has only been an issue when dealing with multi-TB tables, where it takes several days to copy the data for one table.
One trick is to remove all the indexes during the initial sync, except for the primary key, so the subscriber has less work to do.
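Something like this can list the secondary-index definitions on the subscriber before dropping them, so they can be recreated once the sync finishes (a sketch; 'big_table' is just a placeholder table name):

  -- on the subscriber: save these definitions somewhere, then drop the indexes
  SELECT indexdef
  FROM pg_indexes
  WHERE tablename = 'big_table'
    AND indexname NOT IN (SELECT c.relname
                          FROM pg_index i
                          JOIN pg_class c ON c.oid = i.indexrelid
                          WHERE i.indisprimary);
  -- e.g. DROP INDEX big_table_created_at_idx;  (hypothetical index name)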
> Here's a nice guide where people did a logical replication upgrade, explaining why they did it this way: https://knock.app/blog/zero-downtime-postgres-upgrades
The blog suggests overly complicated things, like only copying 100GB chunks of data at a time. Maybe that makes sense if the publisher is scarce on resources, or the table is multi-TB in size and requires days to weeks to copy...
If the publisher is so low on resources that Logical Replication is problematic, one can create a binary replica, promote it, and convert it to logical replication, skipping the initial sync, then upgrade that server. There is a minor outage required to convert a binary replica to a logical replica; I've done it in under 30 seconds.
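For reference, the core of that conversion is creating the subscription against a slot that already exists and skipping the copy, roughly like this on the promoted replica (the publication name, slot name, connection string, and promotion LSN below are placeholders; the publication and slot are assumed to have been created on the publisher before the replica was promoted):

  CREATE SUBSCRIPTION sub_upgrade
      CONNECTION 'host=publisher dbname=appdb user=replicator'
      PUBLICATION pub_upgrade
      WITH (create_slot = false, slot_name = 'upgrade_slot',
            copy_data = false, enabled = false);
  -- advance the subscription's replication origin to the LSN the replica was
  -- promoted at, so changes already present are not replayed, then enable it:
  -- SELECT pg_replication_origin_advance('pg_' || oid, '<promotion LSN>'::pg_lsn)
  --   FROM pg_subscription WHERE subname = 'sub_upgrade';
  ALTER SUBSCRIPTION sub_upgrade ENABLE;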
> On Wed, Jun 12, 2024 at 7:01 PM Justin <zzzzz.graf@gmail.com> wrote:
>> On Tue, Jun 11, 2024 at 5:43 PM Koen De Groote <kdg.dev@gmail.com> wrote:
>>>> If there are any errors during the replay of WAL such as missing indexes for Replica Identities during an Update or Delete this will cause the main subscriber worker slot on the publisher to start backing up WAL files
>>>
>>> And also if the connection breaks, from what I understand, is that correct? Anything that stops the subscription, including disabling the subscription, is that right?
>>
>> Yes to all....
>>
>>>> I suggest confirming all tables have replica identities or primary keys before going any further.
>>>
>>> Yes, I am aware of this. I made me a small script that prints which tables I have added to the publication and are done syncing, and which are currently not being replicated.
>>
>>>> With PG 11 avoid REPLICA IDENTITY FULL as this causes full table scan on the subscriber for PG 15 and earlier.
>>>
>>> I'm also aware of this. My plan is to create a publication with no tables, and add them 1 by 1, refreshing the subscriber each time.
>>
>> Why? what benefit does this provide you?? Add all the tables when creating the publication and be done with it... I get this when trying to understand how this all works on test boxes, but for production NO idea what you're trying to accomplish
>>
>>> I'm not planning on using "REPLICA IDENTITY FULL" anywhere.
>>
>> Good