Re: generic plans and "initial" pruning - Mailing list pgsql-hackers
| From | Amit Langote |
|---|---|
| Subject | Re: generic plans and "initial" pruning |
| Date | |
| Msg-id | CA+HiwqHni0Cv85ZY6QPFBtTutzCU+GGVYn0n4hGU_kSF7oyJYA@mail.gmail.com |
| In response to | Re: generic plans and "initial" pruning (Tender Wang <tndrwang@gmail.com>) |
| List | pgsql-hackers |
On Sun, Nov 23, 2025 at 9:17 PM Tender Wang <tndrwang@gmail.com> wrote:
> Amit Langote <amitlangote09@gmail.com> wrote on Thu, Nov 20, 2025 at 15:30:
>>
>> On Mon, Nov 17, 2025 at 9:50 PM Amit Langote <amitlangote09@gmail.com> wrote:
>> > On Wed, Nov 12, 2025 at 11:17 PM Amit Langote <amitlangote09@gmail.com> wrote:
>> > > * Enable pruning-aware locking in cached / generic plan reuse (0004):
>> > > extends GetCachedPlan() and CheckCachedPlan() to call ExecutorPrep()
>> > > on each PlannedStmt in the CachedPlan, locking only surviving
>> > > partitions. Adds CachedPlanPrepData to pass this through plan cache
>> > > APIs and down to execution via QueryDesc. Also reinstates the
>> > > firstResultRel locking rule added in 28317de72 but later lost due to
>> > > revert of the earlier pruning patch, to ensure correctness when all
>> > > target partitions are pruned.
>> >
>> > Looking at the changes to executor/function.c, I also noticed that I
>> > had mistakenly allocated the ExecutorPrep state in
>> > SQLFunctionCache.fcontext whereas the correct context for execution
>> > related state is SQLFunctionCache.subcontext. In the updated patch,
>> > I've made postquel_start() reparent the prep EState's es_query_cxt to
>> > subcontext from fcontext. I also did not have a test case that
>> > exercised cached plan reuse for SQL functions, so I added one. I split
>> > function.c's GetCachedPlan() + CachedPlanPrepData plumbing into a
>> > new patch 0005 so it can be reviewed separately, since it is the only
>> > non-mechanical call-site change.
>>
>> I also noticed a bug in the prep cleanup logic that runs when a cached
>> plan becomes invalid during the prep phase. Patch 0005 fixes that and
>> adds a regression test that exercises the invalidation path. This will
>> be folded into 0004 later.
>
> I spent time looking at these patches.
>
> I searched all the places that call GetCachedPlan(), and we always pass
> &cprep (CachedPlanPrepData) to GetCachedPlan().
> In PrepAndCheckCachedPlan(), if plan_cache_mode is force_generic_plan,
> the LockPolicy is always LOCK_UNPRUNED, because *cprep is never NULL.
> It seems that the LockPolicy has no chance to be LOCK_ALL. Am I missing
> something here?

Yes, eventually LockPolicy may end up redundant and we might not need
AcquireExecutorLocksPolicy() at all, with a single locking path covering
both cases.

My goal initially was to stage the changes across call sites: keep a
LOCK_ALL path for callers that still use the old lock-everything-up-front
behaviour, and gradually convert other callers to pass a non-NULL
CachedPlanPrepData and handle the prep_list it may return, so that
GetCachedPlan() can perform LOCK_UNPRUNED locking internally. That is why
GetCachedPlan() accepts a possibly-NULL cprep and why LockPolicy exists as
a separate knob.

For example, I decided to split out the function.c refactoring of plan
cache usage into its own patch. That made me realise that new users of
GetCachedPlan() may appear that first adopt the simpler LOCK_ALL behaviour
and only later switch to LOCK_UNPRUNED when pruning-aware locking becomes
useful for them. Keeping the two paths preserves that incremental route
and avoids forcing every new user to adopt CachedPlanPrepData and
LOCK_UNPRUNED locking up front.

I am still undecided whether that two-path structure is a good idea, but I
am inclined to keep it for now. I would be happy to hear opinions on this.

--
Thanks, Amit Langote
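[Editor's note: the caller-driven policy selection described above can be modelled with a toy sketch. This is not code from the patches; the enum values mirror the names used in the thread, but the struct contents and the helper `choose_lock_policy()` are invented here purely for illustration, under the assumption that the policy is derived solely from whether the caller supplies a CachedPlanPrepData.]

```c
#include <stddef.h>

/* Hypothetical simplification of the two locking policies discussed. */
typedef enum LockPolicy
{
    LOCK_ALL,       /* legacy path: lock every relation up front */
    LOCK_UNPRUNED   /* prep-aware path: lock only surviving partitions */
} LockPolicy;

/* Stand-in for the real CachedPlanPrepData, which carries prep results. */
typedef struct CachedPlanPrepData
{
    int nlocked;    /* illustrative: relations locked after pruning */
} CachedPlanPrepData;

/*
 * A caller that passes NULL keeps the old lock-everything behaviour;
 * a caller that supplies a CachedPlanPrepData opts into pruning-aware
 * locking, as the message describes for GetCachedPlan().
 */
static LockPolicy
choose_lock_policy(CachedPlanPrepData *cprep)
{
    return (cprep == NULL) ? LOCK_ALL : LOCK_UNPRUNED;
}
```

This also shows why, with every current call site passing `&cprep`, the LOCK_ALL branch is dead code today yet remains available for future callers that want the simpler behaviour first.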