Re: PostgreSQL12 crash bug report - Mailing list pgsql-bugs

From: Andres Freund
Subject: Re: PostgreSQL12 crash bug report
Msg-id: 20190801043658.4hqm7k3iarmq3q7i@alap3.anarazel.de
In response to: Re: PostgreSQL12 crash bug report (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: PostgreSQL12 crash bug report (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-bugs

Hi,

On 2019-07-31 22:37:40 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > On 2019-07-31 19:07:29 -0400, Tom Lane wrote:
> >> It's setting up an es_epqTupleSlot[] entry on the assumption that it
> >> should have the same tupdesc as the output tuple that's to be rechecked.
> >> This might accidentally fail to malfunction in single-table cases,
> >> but it's quite broken for any join case --- in particular, for the
> >> given test case, the scan tuple for the VALUES node surely doesn't have
> >> the same tupdesc as the join result.
> 
> > To make sure I understand - the problem isn't the slot that we've
> > created in nodeModifyTable.c via EvalPlanQualSlot(), right? It's the one
> > we create in EvalPlanQualFetchRowMarks(), because we don't have a proper
> > tuple type handy to create the slot with?
> 
> Yeah, I think nodeModifyTable.c is fine, because it always passes the
> target relation.  EvalPlanQualFetchRowMarks is not fine, and I'm unsure
> about the call in nodeLockRows.c.

I think nodeLockRows.c itself should be fine - the slots it gets with
EvalPlanQualSlot() are either used by FDWs to store tuples, or
nodeLockRows.c directly fetches tuples into them with
table_tuple_lock(). It only deals with locking rowmarks itself, for
which we obviously have a relation.  It does trigger
EvalPlanQualFetchRowMarks() for the non-locking marks - that part is
obviously broken, but not in nodeLockRows.c itself.


> > Previously we simply didn't need to know the type during EPQ setup,
> > because we only stored a HeapTuple anyway. And we'd know the appropriate
> > tupledesc at the places that access the tuple.
> 
> Right.  So we gotta refactor that somehow.

One way to avoid doing so would be to just not deal with slots in the
case of ROW_MARK_COPY. The reason EPQ was made to use slots instead of
HeapTuples is that not doing so would require expensive conversions for
AMs that don't use HeapTuple.  But for COPY rowmarks that's not an
issue, because the source already is a HeapTuple (in the form of a
Datum).

That's imo not particularly pretty, but it might be the most viable
approach for v12. Attached is a quick WIP implementation, which indeed
fixes the crash reported here and passes all our tests. Obviously it'd
need some polish if we went for that.



I think it's pretty darn ugly that we don't know what kind of tuples
we're dealing with at the top-level of EPQ. But changing that seems far
from trivial.

The way things are set up, we don't have a good way to map from rtis to the
associated PlanStates, and currently only the PlanStates have enough
information to infer the slot type.  In fact, if we knew the ScanState
nodes for specific rtis, we could perhaps even just use each node's
ScanState->ss_ScanTupleSlot - seems fragile, but we've effectively done
so before v12.

And we can't easily add a way to map from rti to PlanState without adding
work to ExecInit* for every Scan node. If ExecInitNode() knew whether
the relevant nodes were Scan nodes, we could maintain the mapping at
that stage, but we afaict don't know that.

I briefly looked at hacking something like ExecInitScanTupleSlot() to
maintain an EState rti->ScanState mapping, but we have ScanState
executor nodes for planner nodes that do *not* inherit from Scan
(e.g. Material). While that's ugly, it's not something we seem likely to
change all that soon - and using ExecInitScanTupleSlot() as a proxy is
just a hack anyway.  It sure would be nice if IsA() actually worked when
inheriting from a node...

We could easily maintain a mapping from Plan->plan_node_id to the
corresponding PlanState from within ExecInitNode(). But I don't think
that'd help that much, as we'd need an rti -> plan_node_id mapping,
which'd again not be trivial to maintain. We could do so in
set_plan_refs(), as that knows both the relevant plan_node_id, and the
adjusted scanrelid.

I'm not exactly sure what the intended uses of plan_node_id are - it was
introduced in d1b7c1ffe72e86932b5395f29e006c3f503bc53d and is, exclusively
as far as I can see, used as a unique key into dynamic shared memory for
parallel query (so each node can cheaply build a key for its own
data). So perhaps it'd be a gross misuse of that system. But it doesn't
look like that to me.

Not sure if there are other things that could benefit from the
rti -> ScanState or plan_node_id -> PlanState mapping. It'd be nice for
debugging, I guess, but that doesn't quite seem enough.


> > One bigger change - but possibly one worth it - would be to basically
> > move the work done in EvalPlanQualFetchRowMarks() to be done on-demand,
> > at least for ROW_MARK_COPY.
> 
> Hm.  Too tired to think that through right now.

I briefly tried to implement that. The problem is that, as things are
currently set up, we don't have access to the corresponding epqstate
from within executor nodes. That prevents us from accessing
EPQState->{origslot, arowMarks}, which we need to actually be able to
fetch the rows to return.

We could solve that by referencing the EPQState from its EState (but not
from the "main" estate). I've wondered before whether that'd be better
than having various es_epq* fields in EState.  I don't think it'd make
the modularity any worse, given that we're already referencing EPQState
fields from within EState.  If we moved es_epqTupleSlot into EPQState and
allocated it from within EvalPlanQualInit() instead of
EvalPlanQualBegin(), we'd also address your complaint that we now call
that earlier.

Doesn't sound that hard. Seems like a big change for v12 though.


As far as I can tell, the only behavioural change of fetching rows later
instead of in EvalPlanQualFetchRowMarks would be that we'd fetch
(locally or via FDW) rows that are referenced, but not locked, at the
same time we lock rows, and that we would not fetch them if any of the
locked rows couldn't be locked.  Which seems like a mild improvement.

Greetings,

Andres Freund

Attachment
