Re: row filtering for logical replication - Mailing list pgsql-hackers

From Dilip Kumar
Subject Re: row filtering for logical replication
Date
Msg-id CAFiTN-tZ6TT+hJt-MKwtumx62r1wnAVdDDuFyNNA==kLTsVgjQ@mail.gmail.com
In response to Re: row filtering for logical replication  (Amit Kapila <amit.kapila16@gmail.com>)
Responses Re: row filtering for logical replication  (Tomas Vondra <tomas.vondra@enterprisedb.com>)
Re: row filtering for logical replication  (Amit Kapila <amit.kapila16@gmail.com>)
List pgsql-hackers
On Mon, Jul 19, 2021 at 3:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> a. Just log it and move to the next row
> b. send to stats collector some info about this which can be displayed
> in a view and then move ahead
> c. just skip it like any other row that doesn't match the filter clause.
>
> I am not sure if there is any use of sending a row if one of the
> old/new rows doesn't match the filter. Because if the old row doesn't
> match but the new one matches the criteria, we will anyway just throw
> such a row on the subscriber instead of applying it.

But at some point that will be true even if we skip the row per (a) or
(c), right?  Suppose the OLD row does not satisfy the condition but the
NEW row does: if we skip this operation, then on the next operation on
the same row, even if both the OLD and NEW rows satisfy the filter, the
change will just be dropped by the subscriber, because we never sent the
row when it was first updated to a value that satisfied the condition.
So basically, once a row is inserted that does not satisfy the
condition, then no matter how many updates we apply to that row, each
change will either be skipped by the publisher (because the OLD row did
not satisfy the condition) or dropped by the subscriber (because it has
no matching row).
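To make the scenario concrete, here is a minimal simulation of option
(c) (hypothetical filter and apply logic, not the actual pgoutput code):
a row inserted while failing the filter can never reach the subscriber,
no matter how many later UPDATEs satisfy the filter on both sides.

```python
# Hypothetical model: publisher skips an UPDATE whenever the OLD row does
# not match the filter; subscriber applies changes only to rows it already
# has.  Example filter: value > 10.

def matches(row):
    return row["value"] > 10

publisher = {}    # pk -> row on the publisher
subscriber = {}   # pk -> row on the subscriber

def replicate_insert(pk, row):
    publisher[pk] = row
    if matches(row):
        subscriber[pk] = row          # sent only if the new row matches

def replicate_update(pk, new_row):
    old_row = publisher[pk]
    publisher[pk] = new_row
    if not matches(old_row):
        return                        # option (c): OLD row fails filter -> skip
    if pk in subscriber:
        subscriber[pk] = new_row      # otherwise the apply side drops it

# Row inserted while failing the filter: never reaches the subscriber.
replicate_insert(1, {"value": 5})
# First UPDATE moves it into the filtered set, but the OLD row fails -> skipped.
replicate_update(1, {"value": 20})
# Later UPDATE has OLD and NEW both matching, yet the subscriber has no
# row with pk=1, so the change is dropped downstream anyway.
replicate_update(1, {"value": 30})

print(publisher)   # {1: {'value': 30}}
print(subscriber)  # {} -- the row is permanently missing on the subscriber
```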

> > Maybe a second option is to have replication change any UPDATE into
> > either an INSERT or a DELETE, if the old or the new row do not pass the
> > filter, respectively.  That way, the databases would remain consistent.

Yeah, I think this is the best way to keep the data consistent.
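The transform could look roughly like this (a sketch of the idea only,
with a hypothetical helper name, not the actual pgoutput
implementation): dispatch on which of the old/new rows pass the filter.

```python
# Turn an UPDATE into INSERT, DELETE, or UPDATE depending on which of the
# OLD/NEW rows pass the row filter, so the subscriber stays consistent.

def transform_update(old_row, new_row, matches):
    old_ok, new_ok = matches(old_row), matches(new_row)
    if old_ok and new_ok:
        return ("UPDATE", old_row, new_row)   # both visible: plain UPDATE
    if not old_ok and new_ok:
        return ("INSERT", None, new_row)      # row enters the filtered set
    if old_ok and not new_ok:
        return ("DELETE", old_row, None)      # row leaves the filtered set
    return None                               # neither side matches: skip

f = lambda r: r["value"] > 10
print(transform_update({"value": 5}, {"value": 20}, f))
# ('INSERT', None, {'value': 20})
print(transform_update({"value": 20}, {"value": 5}, f))
# ('DELETE', {'value': 20}, None)
```

This way the earlier problem goes away: the first UPDATE whose NEW row
satisfies the filter is replicated as an INSERT, so the subscriber has
the row for all subsequent changes.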

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


