However, when the situation does occur and that one slot gets behind, it never recovers, and there is no way to recover from this even after reading from the slot with pg_replication_slot_advance() or the pg_logical_slot_get_changes() SQL function.
1) In our observation via psql, the advance command likewise does not move the restart_lsn immediately. In that respect it behaves like our approach of updating the confirmed_flush_lsn via the stream.
2) I accept the point that we may be facing this issue because we are not reading from the stream. But the question is why we are able to move the restart_lsn most of the time just by updating the confirmed_flush_lsn via pgJDBC, while only occasionally it falls too far behind (a way to observe both positions is sketched below).
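For reference, a minimal way to observe both positions is to query pg_replication_slots and compare each slot's restart_lsn with the current WAL position. The sketch below uses a plain JDBC connection; the connection string, credentials, and the slot names 'shared_slot' and 'private_slot' are placeholders, not the actual setup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SlotLagCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/postgres", "postgres", "secret");
             Statement st = con.createStatement();
             // restart_lsn is what pins WAL on disk; confirmed_flush_lsn is the
             // position the client has acknowledged. Comparing restart_lsn
             // against pg_current_wal_lsn() shows how far each slot trails.
             ResultSet rs = st.executeQuery(
                 "SELECT slot_name, restart_lsn, confirmed_flush_lsn, " +
                 "       pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS restart_lag_bytes " +
                 "FROM pg_replication_slots " +
                 "WHERE slot_name IN ('shared_slot', 'private_slot')")) {
            while (rs.next()) {
                System.out.printf("%s restart_lsn=%s confirmed_flush_lsn=%s lag=%d bytes%n",
                        rs.getString("slot_name"),
                        rs.getString("restart_lsn"),
                        rs.getString("confirmed_flush_lsn"),
                        rs.getLong("restart_lag_bytes"));
            }
        }
    }
}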
On Tue, Dec 15, 2020 at 11:00 AM Jammie <shailesh.jamloki@gmail.com> wrote:
>
> Thanks Amit for the response.
>
> We are using the pgJDBC sample program here:
> https://jdbc.postgresql.org/documentation/head/replication.html
>
> The setFlushedLSN is coming from pgJDBC only.
>
> The pgJDBC APIs are available on GitHub:
> https://github.com/pgjdbc/pgjdbc
>
> The second slot refers to the "private" slot.
>
> So "we are not doing reading from the stream" ==> it means that we issue
> the readPending call only on the shared slot; we then get the
> lastReceiveLSN() from the stream and send it back to the stream as the
> confirmed_flush_lsn for both the private and the shared slot. We don't
> issue a readPending call on the private slot. We will use the private
> slot only when we have no choice; it is kind of a reserve slot for us.
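For context, the consume-and-feedback loop described above (readPending on the shared slot, then reporting the last received LSN back) looks roughly like the following, adapted from the pgJDBC sample linked above. It is a sketch only: the connection details and the slot name are placeholders, and the slot is assumed to have been created with the test_decoding output plugin.

import java.nio.ByteBuffer;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.postgresql.PGConnection;
import org.postgresql.PGProperty;
import org.postgresql.replication.LogSequenceNumber;
import org.postgresql.replication.PGReplicationStream;

public class SharedSlotReader {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        PGProperty.USER.set(props, "postgres");          // placeholder
        PGProperty.PASSWORD.set(props, "secret");        // placeholder
        PGProperty.ASSUME_MIN_SERVER_VERSION.set(props, "9.4");
        PGProperty.REPLICATION.set(props, "database");
        PGProperty.PREFER_QUERY_MODE.set(props, "simple");

        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/postgres", props)) {
            PGConnection replCon = con.unwrap(PGConnection.class);

            PGReplicationStream stream = replCon.getReplicationAPI()
                    .replicationStream()
                    .logical()
                    .withSlotName("shared_slot")            // placeholder slot name
                    .withSlotOption("include-xids", false)  // test_decoding option
                    .start();

            while (true) {
                ByteBuffer msg = stream.readPending();      // non-blocking read
                if (msg == null) {
                    TimeUnit.MILLISECONDS.sleep(10);
                    continue;
                }
                // Consume the decoded change (here just printed).
                int offset = msg.arrayOffset();
                byte[] source = msg.array();
                System.out.println(new String(source, offset, source.length - offset));

                // Report the position back so the server can move
                // confirmed_flush_lsn (and, once it is safe, restart_lsn).
                LogSequenceNumber last = stream.getLastReceiveLSN();
                stream.setAppliedLSN(last);
                stream.setFlushedLSN(last);
                stream.forceUpdateStatus();                 // push feedback immediately
            }
        }
    }
}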
I think this (not performing read/decode on the private slot) could be the reason why it is lagging behind. If you want to use it as a reserve slot, then you probably want to at least perform pg_replication_slot_advance() to move it to the required position. The restart_lsn won't move unless you read/decode from that slot.
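A minimal sketch of that suggestion, with a hypothetical slot name: periodically advance the idle private slot to the current WAL position using pg_replication_slot_advance() (available since PostgreSQL 11), so it keeps decoding and cannot pin old WAL indefinitely.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class AdvanceReserveSlot {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/postgres", "postgres", "secret");
             // Decodes the slot up to the target LSN; the server moves
             // restart_lsn forward once it is safe, so it may take more
             // than one call to catch up completely.
             PreparedStatement ps = con.prepareStatement(
                 "SELECT * FROM pg_replication_slot_advance(?::name, pg_current_wal_lsn())")) {
            ps.setString(1, "private_slot"); // placeholder slot name
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    System.out.println("slot " + rs.getString(1)
                            + " advanced to " + rs.getString(2));
                }
            }
        }
    }
}

Note that pg_replication_slot_advance() errors out if the slot is currently in use, which should not be a problem for a reserve slot that is never read from.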