In LogicalConfirmReceivedLocation, two fields of the ReplicationSlot structure (data.catalog_xmin and effective_catalog_xmin) are used to advance the slot's xmin. This avoids a window in which tuples might already have been vacuumed while the slot's state had not yet been flushed to disk: if we crashed during that window, after restart we would assume that all tuples with xmax > the old catalog_xmin are still there, while some of them might in fact already have been vacuumed. However, the same trick is not used for restart_lsn advancement. This means that under somewhat unlikely circumstances we may start decoding from a segment that has already been recycled, leaving the slot broken. Shouldn't this be fixed?
We do release the slot spinlock after updating the slot and before dirtying and flushing it. But to make the change visible, someone else would have to call ReplicationSlotsComputeRequiredLSN(). That's possible by the looks of it: it could be triggered by a concurrent slot drop, a physical slot confirmation, or another logical slot handling a concurrent confirmation.
For something to break, we'd have to
* hit this race: a concurrent call to ReplicationSlotsComputeRequiredLSN() updates XLogCtl->replicationSlotMinLSN while we are in LogicalConfirmReceivedLocation
* have the furthest-behind slot be the one in the race window in LogicalConfirmReceivedLocation
* have a checkpoint run here, before the slot is marked dirty
* actually recycle/remove the needed xlog
* crash before writing the new slot state
Checkpoints write out slot state, but only for dirty slots. And we don't dirty the slot while we hold its spinlock; we only dirty it just before writing it out.
So I can't say it's definitely impossible. It seems astonishingly unlikely, but that's not always good enough.