Re: logical changeset generation v6.2 - Mailing list pgsql-hackers

From: Andres Freund
Subject: Re: logical changeset generation v6.2
Date:
Msg-id: 20131029144758.GC21284@awork2.anarazel.de
In response to: Re: logical changeset generation v6.2 (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: logical changeset generation v6.2 (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On 2013-10-28 11:54:31 -0400, Robert Haas wrote:
> > There's one snag I currently can see, namely that we actually need to
> > prevent a formerly dropped relfilenode from being reused. Not
> > entirely sure what the best way to do that is.
> 
> I'm not sure in detail, but it seems to me that this is all part of the
> same picture.  If you're tracking changed relfilenodes, you'd better
> track dropped ones as well.

What I am thinking about is the way GetNewRelFileNode() checks for
preexisting relfilenodes. It uses SnapshotDirty to scan for existing
relfilenodes for a newly created oid, which means the relfilenode of an
already dropped relation could be reused.
I guess it could be as simple as using SatisfiesAny (or, even better, a
wrapper around SatisfiesVacuum that knows about recently dead tuples).
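
For illustration only, here is a minimal sketch of what such a collision
check could look like: scan pg_class for the candidate relfilenode with
SnapshotAny, so rows of recently dropped relations still count as
collisions until they are vacuumed away. This is not the actual
GetNewRelFileNode() code; the helper name relfilenode_collides() is made
up for the example, and the plain heap scan is an assumption (pg_class has
no index on relfilenode).

    /*
     * Hypothetical sketch -- not the actual patch or backend code.
     * Unlike SnapshotDirty, SnapshotAny also sees dead-but-not-yet-
     * vacuumed rows, so the relfilenode of a recently dropped relation
     * is still reported as taken.
     */
    #include "postgres.h"

    #include "access/genam.h"
    #include "access/htup_details.h"
    #include "access/skey.h"
    #include "catalog/pg_class.h"
    #include "utils/fmgroids.h"
    #include "utils/rel.h"
    #include "utils/tqual.h"

    static bool
    relfilenode_collides(Relation pg_class_rel, Oid relfilenode)
    {
        ScanKeyData key;
        SysScanDesc scan;
        bool        collides;

        ScanKeyInit(&key,
                    Anum_pg_class_relfilenode,
                    BTEqualStrategyNumber, F_OIDEQ,
                    ObjectIdGetDatum(relfilenode));

        /* plain heap scan: no index on relfilenode, so indexOK = false */
        scan = systable_beginscan(pg_class_rel, InvalidOid, false,
                                  SnapshotAny, 1, &key);
        collides = HeapTupleIsValid(systable_getnext(scan));
        systable_endscan(scan);

        return collides;
    }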

> Completely aside from this issue, what
> keeps a relation from being dropped before we've decoded all of the
> changes made to its data before the point at which it was dropped?  (I
> hope the answer isn't "nothing".)

Nothing. But there's no need to prevent it: the relation will still be in
the catalog, and we never access a non-catalog relation's data during
decoding.
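
To make that concrete: a change reaches the output plugin as tuple data
reassembled from WAL by the reorder buffer, plus a relcache entry built
from the catalog; the user table's own files are never opened, so the
table may already be gone by the time its changes are decoded. A
bare-bones change callback, sketched against the interface as it was
later committed in 9.4 (details may differ from the v6.2 patch; the
function name sketch_change_cb is made up), looks roughly like this:

    #include "postgres.h"

    #include "lib/stringinfo.h"
    #include "replication/logical.h"
    #include "replication/output_plugin.h"
    #include "replication/reorderbuffer.h"
    #include "utils/rel.h"

    static void
    sketch_change_cb(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
                     Relation relation, ReorderBufferChange *change)
    {
        OutputPluginPrepareWrite(ctx, true);

        /* relation descriptor comes from the catalog, not the table's heap */
        appendStringInfo(ctx->out, "table \"%s\": ",
                         RelationGetRelationName(relation));

        switch (change->action)
        {
            case REORDER_BUFFER_CHANGE_INSERT:
                /* change->data.tp.newtuple holds the tuple decoded from WAL */
                appendStringInfoString(ctx->out, "INSERT");
                break;
            case REORDER_BUFFER_CHANGE_UPDATE:
                appendStringInfoString(ctx->out, "UPDATE");
                break;
            case REORDER_BUFFER_CHANGE_DELETE:
                appendStringInfoString(ctx->out, "DELETE");
                break;
            default:
                break;
        }

        OutputPluginWrite(ctx, true);
    }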

Greetings,

Andres Freund

--
Andres Freund                       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


