Re: Two-phase update of restart_lsn in LogicalConfirmReceivedLocation - Mailing list pgsql-hackers

From Craig Ringer
Subject Re: Two-phase update of restart_lsn in LogicalConfirmReceivedLocation
Date
Msg-id CAMsr+YFrQDe-Gx1REy9wp7E2s6J9zk=kjOvd8ciXv_v2f5cNqw@mail.gmail.com
In response to Two-phase update of restart_lsn in LogicalConfirmReceivedLocation  (Arseny Sher <a.sher@postgrespro.ru>)
Responses Re: Two-phase update of restart_lsn in LogicalConfirmReceivedLocation  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On 1 March 2018 at 13:39, Arseny Sher <a.sher@postgrespro.ru> wrote:
Hello,

In LogicalConfirmReceivedLocation, two fields of the ReplicationSlot structure
(data.catalog_xmin and effective_catalog_xmin) are used to advance the slot's
xmin. This avoids a window in which tuples might already have been vacuumed
but the slot's state had not yet been flushed to disk: if we crashed during
that window, after restart we would assume that all tuples with
xmax > old catalog_xmin are still there, while some of them might in fact
already be vacuumed. However, the same dodge is not used for restart_lsn
advancement. This means that under somewhat unlikely circumstances we could
start decoding from a segment which has already been recycled, breaking the
slot. Shouldn't this be fixed?

You mean the small window between 

            MyReplicationSlot->data.restart_lsn = MyReplicationSlot->candidate_restart_lsn;


and 

            ReplicationSlotMarkDirty();
            ReplicationSlotSave();

in LogicalConfirmReceivedLocation?

We do release the slot spinlock after updating the slot and before dirtying and flushing it. But to make the change visible, someone else would have to call ReplicationSlotsComputeRequiredLSN(). That does look possible: it could be triggered by a concurrent slot drop, a physical slot confirmation, or another logical slot handling a concurrent confirmation.
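To make the visibility problem concrete, here's a hedged miniature of the situation (names and types are invented for illustration; this is not the PostgreSQL code). The required-LSN computation takes the minimum of the *in-memory* restart_lsn across slots, so an advancement that has not yet been flushed is still visible to it:

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for a replication slot: the in-memory restart_lsn can
 * run ahead of what has actually been written to disk. */
typedef struct MiniSlot
{
    uint64_t restart_lsn; /* in-memory value */
    uint64_t flushed_lsn; /* last value durably saved */
} MiniSlot;

/* Rough analogue of ReplicationSlotsComputeRequiredLSN(): the minimum
 * in-memory restart_lsn across all slots decides which WAL segments
 * may be recycled. */
static uint64_t
compute_required_lsn(const MiniSlot *slots, int n)
{
    uint64_t min = UINT64_MAX;

    for (int i = 0; i < n; i++)
        if (slots[i].restart_lsn < min)
            min = slots[i].restart_lsn;
    return min;
}
```

If slot 0 advances its in-memory restart_lsn from 100 to 400 without saving, a concurrent recompute now permits recycling everything below the other slot's position, even though slot 0's durable state still says 100 — exactly the window being discussed.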

For something to break, we'd have to

* have a concurrent call to ReplicationSlotsComputeRequiredLSN update XLogCtl->replicationSlotMinLSN while we're in that window in LogicalConfirmReceivedLocation
* have the furthest-behind slot be the one sitting in the race window in LogicalConfirmReceivedLocation
* checkpoint at that point, before the slot is marked dirty
* actually recycle/remove the needed xlog
* crash before writing the new slot state

Checkpoints write out slot state, but only for dirty slots, and we didn't dirty the slot while we held its spinlock; we only dirty it just before writing.

So I can't say it's definitely impossible. It seems astonishingly unlikely, but that's not always good enough.
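If it does need fixing, the obvious shape is the one Arseny points at: mirror the catalog_xmin/effective_catalog_xmin dance for restart_lsn. A minimal sketch of that two-phase pattern (field and function names invented for illustration; these are not the actual PostgreSQL structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy slot: the "data" field is what gets flushed to disk; the
 * "effective" field is what WAL-recycling decisions read, mirroring
 * how effective_catalog_xmin shadows data.catalog_xmin. */
typedef struct MiniSlot
{
    uint64_t data_restart_lsn;      /* persistent copy, written at save */
    uint64_t effective_restart_lsn; /* copy used to compute required LSN */
    uint64_t on_disk_restart_lsn;   /* simulated on-disk state */
    bool     dirty;
} MiniSlot;

/* Phase 1: record the candidate in the persistent part only; recycling
 * computations still see the old, conservative effective value. */
static void
slot_advance_phase1(MiniSlot *slot, uint64_t candidate)
{
    slot->data_restart_lsn = candidate;
    slot->dirty = true;
}

/* Stand-in for ReplicationSlotSave(): flush persistent state to disk. */
static void
slot_save(MiniSlot *slot)
{
    slot->on_disk_restart_lsn = slot->data_restart_lsn;
    slot->dirty = false;
}

/* Phase 2: only after a successful flush does the advancement become
 * visible to WAL-recycling computations. */
static void
slot_advance_phase2(MiniSlot *slot)
{
    slot->effective_restart_lsn = slot->data_restart_lsn;
}
```

With this split, a crash anywhere between phase 1 and the save leaves both the effective value and the on-disk value at the old position, so the needed WAL can't have been recycled out from under the slot.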

--
 Craig Ringer                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
