My original idea for using the new "memory context" mechanisms for
recovering memory in the executor went like this: give each Plan node
a "per tuple" context that is reset, and switched into, at the start
of each call of the node's ExecProcNode routine, thereby recovering
the memory allocated during the previous tuple cycle.
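Roughly, I had pictured something like this (just a sketch: the
PerTupleNodeState struct and its field are made up for illustration,
only the mmgr calls are real):

#include "postgres.h"
#include "utils/memutils.h"

/* Hypothetical per-node state; not an existing executor struct */
typedef struct PerTupleNodeState
{
    MemoryContext per_tuple_cxt;    /* reset once per output tuple */
} PerTupleNodeState;

/* Would be called at the top of each ExecProcNode cycle */
static MemoryContext
begin_tuple_cycle(PerTupleNodeState *node)
{
    /* free everything allocated while producing the previous tuple */
    MemoryContextReset(node->per_tuple_cxt);
    /* make subsequent palloc()s land in the short-lived context */
    return MemoryContextSwitchTo(node->per_tuple_cxt);
}

ExecProcNode would switch back to the returned context before handing
its result tuple up.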
This idea has pretty much crashed and burned on takeoff :-(. It turns
out there are way too many plan-level routines that assume they can
do palloc() to allocate memory that will still be there the next time
they are called. An example is that rtree index scans use a stack of
palloc'd nodes to keep track of where they are ... and that stack had
better still be there when you ask for the next tuple.
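To make the failure mode concrete, here is the sort of pattern that
breaks (not the actual rtree code, just an illustration): the scan
pallocs its bookkeeping and expects it to survive until the next call.

#include "postgres.h"
#include "storage/block.h"

/* Illustrative only: a scan keeps a palloc'd stack across calls */
typedef struct ScanStackEntry
{
    struct ScanStackEntry *next;
    BlockNumber blkno;          /* page still to be visited */
} ScanStackEntry;

static void
push_pending_page(ScanStackEntry **stack, BlockNumber blkno)
{
    /* If CurrentMemoryContext were a per-tuple context, this entry
     * would vanish at the next reset, though the scan still needs it. */
    ScanStackEntry *entry = (ScanStackEntry *) palloc(sizeof(ScanStackEntry));

    entry->blkno = blkno;
    entry->next = *stack;
    *stack = entry;
}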
We could possibly teach all these places to use something other than
CurrentMemoryContext for their allocations, but it doesn't look like an
appetizing prospect. It looks tedious and highly error-prone, both of
which are adjectives I'd hoped to avoid for this project.
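The "teaching" would amount to rewriting call sites like the one above
to allocate persistent state in an explicitly longer-lived context,
roughly like this (reusing the illustrative stack from above; scan_cxt
stands for some hypothetical per-scan context), multiplied by every
such call site in the tree:

static void
push_pending_page_explicit(MemoryContext scan_cxt,
                           ScanStackEntry **stack, BlockNumber blkno)
{
    /* allocate in an explicit per-scan context instead of relying on
     * whatever CurrentMemoryContext happens to be */
    ScanStackEntry *entry = (ScanStackEntry *)
        MemoryContextAlloc(scan_cxt, sizeof(ScanStackEntry));

    entry->blkno = blkno;
    entry->next = *stack;
    *stack = entry;
}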
What I'm currently considering instead is to still create a per-tuple
context for each plan node, but use it only for expression evaluation,
i.e., we switch into it on entry to ExecQual(), ExecTargetList(),
ExecProject(), and maybe a few other places. Since the majority of our
leakage problems are associated with expression evaluation, this should
let us fix most of them. It will mean that routines associated with
plan nodes (basically, executor/node*.c) will still need to be careful
to avoid leaks. For the most part they are already, but I had hoped to
make that care less necessary.
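In code terms the change would be confined to the expression-evaluation
entry points, roughly like this (again just a sketch: the per_tuple_cxt
argument is hypothetical, and the ExecQual call stands in for its
existing body, which is where the reset/switch would really live):

#include "postgres.h"
#include "executor/executor.h"
#include "utils/memutils.h"

static bool
ExecQualInPerTupleContext(List *qual, ExprContext *econtext,
                          MemoryContext per_tuple_cxt)
{
    MemoryContext oldcxt;
    bool          result;

    /* throw away whatever the previous tuple's expressions allocated */
    MemoryContextReset(per_tuple_cxt);
    oldcxt = MemoryContextSwitchTo(per_tuple_cxt);

    /* existing qual evaluation; its palloc()s are now short-lived */
    result = ExecQual(qual, econtext, false);

    MemoryContextSwitchTo(oldcxt);
    return result;
}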
Comments, better ideas?
regards, tom lane