I wrote:
> I'm still wondering though why Yura is observing resources remaining
> held by an executed-to-completion Portal. I think investigating that
> might be more useful than tinkering with pipeline mode.
I got a chance to look into this finally. The lens I've been looking
at this through is "why are we still holding any buffer pins when
ExecutorRun finishes?". Normal table scan nodes won't do that.
It turns out that the problem is specific to SELECT FOR UPDATE, and
it happens because nodeLockRows is not careful to shut down the
EvalPlanQual mechanism it uses before returning NULL at the end of
a scan. If EPQ has been fired, it'll be holding a tuple slot
referencing whatever tuple it was last asked about. The attached
trivial patch seems to take care of the issue nicely, while adding
little if any overhead. (A repeat call to EvalPlanQualEnd doesn't
do much.)
regards, tom lane
diff --git a/src/backend/executor/nodeLockRows.c b/src/backend/executor/nodeLockRows.c
index b2e5c30079..7583973f4a 100644
--- a/src/backend/executor/nodeLockRows.c
+++ b/src/backend/executor/nodeLockRows.c
@@ -59,7 +59,11 @@ lnext:
 	slot = ExecProcNode(outerPlan);
 
 	if (TupIsNull(slot))
+	{
+		/* Release any resources held by EPQ mechanism before exiting */
+		EvalPlanQualEnd(&node->lr_epqstate);
 		return NULL;
+	}
 
 	/* We don't need EvalPlanQual unless we get updated tuple version(s) */
 	epq_needed = false;
@@ -381,6 +385,7 @@ ExecInitLockRows(LockRows *node, EState *estate, int eflags)
 void
 ExecEndLockRows(LockRowsState *node)
 {
+	/* We may have shut down EPQ already, but no harm in another call */
 	EvalPlanQualEnd(&node->lr_epqstate);
 	ExecEndNode(outerPlanState(node));
 }