On Fri, Jun 14, 2024 at 12:10 AM Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Thu, May 23, 2024 at 2:37 AM shveta malik <shveta.malik@gmail.com> wrote:
> > c) update_deleted: The row with the same value as that incoming
> > update's key does not exist. The row is already deleted. This conflict
> > type is generated only if the deleted row is still detectable i.e., it
> > is not removed by VACUUM yet. If the row is removed by VACUUM already,
> > it cannot detect this conflict. It will detect it as update_missing
> > and will follow the default or configured resolver of update_missing
> > itself.
>
> I think this design is categorically unacceptable. It amounts to
> designing a feature that works except when it doesn't. I'm not exactly
> sure how the proposal should be changed to avoid depending on the
> timing of VACUUM, but I think it's absolutely not OK to depend on the
> timing of VACUUM -- or, really, this is going to depend on the timing
> of HOT-pruning, which will often happen almost instantly.
>
Agreed. Above, Tomas speculated about a way to prevent VACUUM from
cleaning up dead tuples until the required changes are received and
applied. Shveta also mentioned another approach of keeping a dead-rows
store (say, a table where deleted rows are kept for resolution) [1],
which is similar to a technique used by some other databases. There is
agreement not to rely on VACUUM to detect such a conflict, but the
alternative is not yet clear. Currently, we are thinking of reporting
such a conflict as update_missing (the row with the same value as the
incoming update's key does not exist). This is how the current HEAD
code behaves and logs the information (logical replication did not
find row to be updated ..).
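
To make the scenario concrete, here is a rough SQL sketch (the table
and column names are illustrative only, not from any patch) of how the
conflict arises and why its classification depends on whether the
deleted tuple is still present on the subscriber:

-- Both nodes start with the same table and row:
--   CREATE TABLE tab (id int PRIMARY KEY, val text);
--   INSERT INTO tab VALUES (1, 'old');

-- On the subscriber, the row is deleted locally:
DELETE FROM tab WHERE id = 1;

-- On the publisher, the same row is updated:
UPDATE tab SET val = 'new' WHERE id = 1;

-- When the update is replicated, the subscriber finds no live row with
-- id = 1.  While the dead tuple is still around, this could in principle
-- be reported as update_deleted; once VACUUM (or HOT pruning) has
-- removed it, only update_missing is detectable, and HEAD logs:
--   LOG:  logical replication did not find row to be updated ...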
[1] - https://www.postgresql.org/message-id/CAJpy0uCov4JfZJeOvY0O21_gk9bcgNUDp4jf8%2BBbMp%2BEAv8cVQ%40mail.gmail.com
--
With Regards,
Amit Kapila.